[Binary tar archive — contents not representable as text. Recoverable structure from the ustar headers:
  var/home/core/zuul-output/                    (directory)
  var/home/core/zuul-output/logs/               (directory)
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, ~561 KB)
The remainder of the file is the raw gzip payload of kubelet.log.gz; decompress the archive member to read the log.]
Ϭ*RVLjDdIZS9>/DQ{0rHRIpKj4vCP q쀜)&8T675ywTv: LVf48tS_M\B兺_xW1א麗f1[K5xYRJ' 15ufFv!a )jZ(K)µh8z<,+نs)$|$%CX HHtR`J C$kmH6mR}H<-AMJ&9YË /EZ>$G<}}+B T.Ma_&X.ҐYJl ,Iևȍ%oAG2,UVLf)ݩ@GU'by;)dS@l-ED5zWlή@@¢c'cW%C׾iw܄;p3hpc)G,2^\WݧdFyyn$I@ $ *DE -Ut(tT0¸_U2bګGx- 2ĠAň (WejVT#!l\="5gOTHHO|N7 /C{&3̖nڭϛ/U/v/-M#6m(NLa?6{IJy 8wzۆ^7 _ּVr!I%U #?q^O?~_)1/1%I 3$es#zɇ(T_CH'P%eG팤F1c D.Jh!9ϫizkn\[CO麖/ld^j[Z'tlge_>\Q&hRNJh)TXS&J!tkM2QU;}q<AYv1MRAGT^9XϒլcF3B5dWLHV2^P*]v^ u(t7dlj(b1g5gO=fNζ *?>FE[- JeM4USx/=# bE >&LbuѥF[LT)IVoP+]_v`'=#GzW1RUS#6hL͠L5Te7(Rax#&T@u2ޕ~Y8}ⲱ8g['ʝ{'oݡ3^}!n6XL{WZ9(gX#MfcO f[44V  Ap>^DNEz*YBI+YeNmQsLѡC25‰5&gҠ (NA;zjY;8kk\DtEV};A+>=OHYMZ6v{`Y0@X+lcla{ޣ<=OB1hl2RPUe=U$d򁌖jWt0V#7g7Niv:qDy 23u wWd#M٩1|q@GGBG΀ nlG@oI$ÇzA:WK$AKh4%RC"gC%R'HѐK  sTkDf-?bs"b%YnmѲ S@쪊L3 Z-Kv {kq.2!%(Z'3mL[~NO?6!fm3ǰ f7Ԁk^ƺFmQ,LTj^GHڷּFwhf .7(~|k n~tQ78eQhF'D9ѸQV^Ze>^h4.Fq1n7j)#wNZDMS'.hvq>_ cbʥv5Z^;Q][+ټ΅F󃔷'A|ţh\k[?4Z8worwgUSWy-v3!O?+I6{ Y&ym5X<L~_ގmud]N~|veyj^{޾^yE1(HIt% 1t^ݫ_r6DI &vL@$1$(Rޙ)ҵhq@.T_OBc?14FK0.KkO`LJ&2q>CD=r90+-ӆpmԭx:`57 Ŧ.N5:$@ t1 ^jf{sUć{o s-l6KI-rW]f)P)|XTRHJyt.W|aE#N{ }q,=uH K:ХXi͆{FBFVf@E+b%% Rb3:m_p_I^1P>Hl̓SƔ</іB46[.r$+f%Ʒ$| y4#JYɚK A2F i9X*Y_5gk9\'zIE~*B5 c4MQ)lD(XQU.ry r:R` $Q: R$nkf.mdJř'?PƿRd}ܨJGBd, g?QeTmV"󟳕֫SD%[& h̾rϬYW+":вh< UIՀk^7 @[;Sm O%hwwmu>}JLHتlk'wy ;~ b:[|bQb޿s#)?\ȭMb)5 VEfO mQåJypg;*]ٝhahvrI%BZvޫg{ԼVr<\M&[[>ny~FM$g*`XzeE5vy's_@g^T_H?">њ!x;}=|WFίߏ>;b!oGsl/ˊXctdx}|,MEQ wge-aeXF-2]s}?mhtmnw㛦~>7/ƓO Ez(g'i =Xc֍M,K>W_ 8e@j |'j[6z0`V:@#A`,S*O]+c :#d"^%)?(RoOY~~4+ڂdZ!h2*T JgI.W?Vic,%Trh3^m20KSu~ޚwz]q1cɜVxfVҟ{_F&8@@C0lrYKl:6Ypg=![ KJU@((:&ї0N(X_c.} .&*P)0j̐ tB5')g bDF{Φԕ[vvdT\a[wpPHPOK) q@>qH8w8| 8|OZmֈ«i~d -و$ @U)H\')O?{Wȍ0p * ܇$[`,pɗ_=Q$$;_Ւeڒeʒ$Md=ŧŪ ũ\JKA{ccFI %pc6ڒը6'u6Y/hHfiTD8{5r)C##(;cЉ΂1P@T9y3Dz-@(8 rKD.{&ndMJ(0g2Ϊu@+Y+n_LObI+ @0K@ ̠+8@Aː5" jB!>@OYx4 Bڢ'Pk΂*g1۬ :8JV5Ṇ;]A•2. 
#rȀe~hR@ u!bqQ~ Nu4Ed^Oq]G zNTMԽM3Ciq1\Y+t6(#2˥̔{gk1yDo7D?nKť >^|(󙵴/5:R{e>XaG=xl&%Q-:Aoea nҟ-4mM__ՈHu@lr`)%l]vgd]Cj>7(%kW n@ֿ@q $XjK+U 69f eb.1RgIR*Hvˍ6$+ٳ=rHRl=,L`JN[ILA1K x?ݲ.dy2ϓZB7a&kCkt)xr@hT]D瞊U=yJңj;;XiB`K&3Z;D@gs\Z93dr r!ka=PP휊G& 9hűȜQlF@zq M~607&%Z ;͙KPAND 1/?cL^LLɢO(!P.rjKI8ZBD #Wq ~қΙw"T}'N7My[aW'[eJq$d jUԈR`a{3\hP')F %89$J]HH\T ,2myRi枆 2>E\4֞NZv5/.S}1P3QLcPCu iJn%~p+ TA)&QM~jx^ޱ2Ɓ~~~ܹr R&ɨSP(aaJHElEkIDFtЂ{ZSB0F1e2Dm\X% $n"S2g~{D>DW'ۮ_v3_wv\!6fqզ[^N!b$ǔH1@;YJ0<vM>'VG#0jcN ]9 |q㵰 2 2 LwJ G/&x/@ J^a~D]Im@*"Y-mj~gӸe\^LJZq)Z[ Z{`5TUJH+6d2H Ă*qA#AaI!{=WGU {P֐]4!$YiQH>`0dB I9X̀iNW^=~Ӹ= mo{gb\N'g\^"`tـ71 1NEFAH% x=brD ܔ~Cgzbn94] KSOZ06?EOTik.Nu,Fnq߁WŢÎ[qR7pyILR f0G-27YRLLl.V`2d9oOU~L;&L%&=qI]t1Iꉹ`)/CP3ڃB&گ'%Քxk+KruъA+z :[q;NhCF1;ic—3J^!,;qy`)w&Ē8v:ZR@N+BL-!v4iV g24>aX=ͤz&ݾɗ9OE6EVCv,^Ĵ$ڜiݛ;6?+%0eL53dPYdk+QOZYꗯNĔqx&D-LAY\%Ud.M!0((#8]XTsl6z d6wuYŧ? 7!(&FB+R)KW&!RVVeRqhd[{ ﻷiF؛V5n{~;7o <[6r@_G lZA^}}+K#p /?h>[F9}F67\8NAE->}{[UֺY'22~n쏍|7-L`ߎ~Joȍlz< $.}re.j9U= ؁̏_ e[̍NPf4:wrauGj޺VVZ+6Zy>twNڇ;<뻣g7b>鯣zq!%NQMq.tU"tn==Y 'I!%+6GPN"s#K1j( RKAd<"3O UΩx4NaVklF@zq 4m%Iq01\%-Z(Ti, YC<(I$ I9c`Oe65-zĕ g蓥a0L\2;gڶq Hf" 2DݖgQbi>qTtel((JH$xֈbKF %eqf-@S;T{1Jp&걫#QH-CYqA`1xF +M(XlSt+O[7\flGZԡpiqOu v>/r4dt] qbest0~6z]9~sN)*Wn6C9 ulBqЙ*sX_kS{NK{MF0a9c*%zГ^LA"eCIhl } ʓD I4'"҆ $%2Y]Cf9. Ӑm<־'T_&^ǏSp[zZ `vtz%ꭳS|jQ H( j@EJ3Fv::W)I"z:$He0Zs() ]ҘKఔl0lnj r*E!k4:m5È1Q t3߱y=0UP6B$%3$PʬVRlb]A:鮹4wO3]niLJƄ**zd˒,:#Tlɪƥ(A;B gࢲNR1 my BƬE𺱞5Αz6>~g.w#a+底 4:j~Zb)E|)Y/X) ydU |;eWd<, Ç N;HQF\ .(SdСV3'8i;V;p<dQrv@"2(%cSّR XA}wL&`1iŔcp归rrAҠ O#aդZƓ_l,LDNU>X^rHY񅡩% VbTcT|ۜ3 :h>s&@'SWvƆ<h&!jj uVk U_oCŐFڿA:HTɚ&!()ѽ^Oz;z;΀u^[=Gz;z; Temz^3` )""-A lQ BAZ H+lR&r|4d0X[ &_dŘX?U-P,IJ$gr ]Zcf9vM۴пA+uOP^y}6^1X|ty% !1UdP P ?e%yC}\Svy;(fu&ِɂR(Jf z+I))8m5Y (@.Zb&5([VP9I$0"|cl:Gy! 
O=|jq:d}tm5_{k쮹U{ n9fcvªL%].sNGBL:jG"nٖ#T)c6{)f ke, IV3^Dk}WqoD>=<.˵_=z=)垸GW,=9 _]3Z4a28=~nAMHI P4)m=!^4~Pӿe l!z|8~Bݫ G^5p0r^TQRGJ!F/TEyW mAok]E;z?M >ʻZ}B%eB216 8y}Ҫ&`5E*OH)kJ[VS65ldrP< )]ÃP4SVsDE"vؑ1B2:d!2R#V$FJ"!0ږ /C)cTZUKy+E-,LV9lÔ6єJj%cڪ[`I|SK"4P m6Z ܂d7Q$!lm犱.FGȌ %p8^d[3KR +aDA5ǔM=\TjbǩQ7[n4JE[-}NjrJ5bB6[tƔ8Utp Z8l[]Ӻ^>A)Xv=D-`Guf-q|N$9ߜHm:l ^XOǢ΋@00A`%Tw|7:AO ߐ&ӧj&jˇV$,=?Z=~_nԫw_W~rW/lز7]oF7gt+ȶ`՚_66c-TGͪha.S?%rf]1b^O1"뮡;ʊՕJcc/c!znCzKm od#oU^ z!}nd嶶{Mo/M|w5a{y6IZ&h>j _yEgQٳ03»3Oa>6hN6F;1{ҵV'x)wkώZ:,HC/71tXY)Y$Y2T%su7^j1~΍!\?zF?q^=i/z~+WɎ_Dn=6NlyluV7m oXtR$d>ɸ;v?gd+GيE ٞ4,|gwuN YDX+.J=,Ǔ1%!P6ݔkCXjۘOOHiM Bp^;DRQx9@ t|sDYJ0OI ;Y6fu&>]r}8l؃륗[Ē!>Sړι# d 2K"Ÿ=#u0mk} 6s2,['FM`$e!/J1+2MJl  0 Sm> bBmDC0?!61ZbX"'UB"*WX#3 TJUB1$]TqdZ0{c14(\=dRM fg!_`U,HǞrk>q`oڪ{6v%0j~)ky{8;!&Ss`  8y \p;lr8iԖw?Y{OF!#(5yTE20%d. ڶA2F*C>0J)DW5&)dSQ1:R:Kk~uSV7 |J7}iߧAp^w\ (+% ;=L:?vB~?%'l?5@Cd?, ~( *  )''>f\y`Rk\>G(gbMb7 ue`τkYDg0}O]R$eZt sOۧY DT<nqYϏzȟeLj-,,~ ䷺]}'"b[6?UBgw2K7Ֆg뫙wHūk U 7ib*ȶ(?a3q@w˚W/]vrh䫲ڭ~s$im~l-)z(IkeCoh`uMVVPZ&v8]Hɛ^rɾЮ;O+hy6?[~kS="uWigƋժ.[Fڨ`;}~GS624-j^"2EB;>\UZї*Rcd J"7{#Z*7%)v-Bt|i"۷(rwkKJ;߭RxSU J6omҤ&DsڗBuؕr mqѹk3SD p{@%%DoB4nֽ<'JIIbE1HGoᶨc[%cK#)D$6)hK se_'Z)|0 46:}Sb>I'uRt0dA ]&|J'{XMI1!KutA&+(I׆R ӌT"yˌgh|z lEE@ +1ؽ*ؠdBZC:m\qLOJ$[PyU벫J|XZ4B,@8քBGWXLANZЬ2vި^\s ^-FW5*? VX4m:TZ{΂6cUhU%`}5 eoW@ԓ*e ,lG[ljU!ScfQ1ِMhe h\`؊s#Xd*zbJe5͐jPoB+~X2nPhSzk(b΂Gܤ a!6K17)_JN ufM0.Qx+9mL:v_ fUPЦ" E4GQFͤ5ƒ#ё2|_&!Jۡr{FjuO J H/[.fQ4f^n!G.j5$DieT"Z$Z( Per9! 
aUhޣG٣?|p.#iҸA4:*6hQK:V̺(N:*&6/ I;L')"_ )`V9?%-hW GĮ]-^ L}6=DfdḰK< <>*&{H\!iZUFB9L: Ρ&q3E ) >@M&jZ5`}ڄLkHtqiy{'e YLZіPQx $nGep6OEW'US U_"rΊf mZA_֊ANEU6`L9hP ]%@_>1&#TϺێ.)XRr*a3m>%tB;뒄 H'{A%DEWHPl5C펁5$BjFP,[v#iX<&6󆞝#t6ڻ*Hsn6,ϩBX[+n]l:gCz?Јb8 'k}8N kƽGh~N B-;HWN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'u)8$';8N `@@ƒwezN L"@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; x@F- !h8VJ @6 @b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; x@%9Wq _f'ctA+N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'q}އۿdëfG?VSa{}\PZ]&^\q =1.W޸q1>zK[$6`Zy(tEh{tE(=c+U؉+{Іv3t\8 `-<uEh}tE(ezt{6z( h~т&9{?kUbQVϮ+f6f~q~v~n]FIkܞzZh5Of73s=MAhawGz+K󮚚-)9,~kuzD*lίe7\;ruA< OOzxk|yY(ɍ}'6}"oPkj(Yqu r8Eu_ w_%5{qb7dQ1<@nlZitxF\)]>~cZؑ_Hx*$srLK.IKQ2_ {aIA4UB=7<^X~=*Pcwx?v~ZNu?ρj(&~ KlO&euK+u׿-;'+2!'}N!htv5y^eH6@5.Z1HMF7Jdˑd5;̲0jf[DocQhUufv#"3/k9lI +@5ʓ_]= dd gdbM5%դd3Ě[P"Pi i<9dm'0WO S>G(hܣ=Y}wM w'^=㑫;61N f/Xf"B C_`mMssSy>)lͯB)*wtj *c;Bdu HΛ444O724muC,F :Hd2FiGL(≯L5`ŰEEf9U.tF}*jdD-ǏAl'Or=r>ݼgm.=ȽCMC*u+>0_,mw9tfwW4|3 g[vܲ[7?p3#zh=L'=go|̚qf'ʷqyDc|ex?ܼY=^_5=otO_=Uf9N|iTV3ŸTvNt7U1{t7n3ſ8wWR;Tn~&nz3SS+n`8ɫX ~{KKM(ᄂK oJZLJQ.໢jJAPe( 48Pa469r9;o>~zS&輏t9 ɔ-6]_+@^f|YKDžF{YNϞLV$!]TjԮʽuI+צS,ckn&uHWy|5vuS|=!UHӧIu/'׿Ao W߹0ޓ|;u?Pϸmr]7jb^qb݆c{ғϼwì'G^pTv`g{w&k7z'޾; -R ;|Tע^Ɬ˵Tz8Ln[ ?L|.&_Gk̥QZ0y%ޚ|?uoutj^]!/\LZ=3yIr)]b%TS4mTZSN\_| j0D% Er ]Tcf9\ nh;iw3^wGq|_M=ś?KhFrxo}_ ֘kpYy v !b L2R,@j@AzwBA'p  AڨM5Je. 9dƑ yT@y%6EAof>pAA 8 ZV>[ݮߧ8x}wHn˙ q@GgBG7@Gde4OU{Ho5A)Js ̱#% "€(TS 0co)."TSΆd a *z ْ!V5љR>c6)RB6I"CRhImd"@K@%r4͆sD\KB4d8[yy{f ҝZ+{B}!X)Y`M33U+4ǫ,J~j:X'eyetWxwj@EJL/IuqvCJf\IW:L@`4)tY#A 1\ڦCuE0-M^ ه r*!ip#ʪC/)Wn+Q09ny{E Bw0{C_nn/JI6"*6dd:Âj??)fTJ)b] SAezf-?rY:Ĩlx8PK'I],g(1jԵxxNrFvJ+䅜%B3*,3˿}z. ` Y 򺱜5Αr՟? 
ueB`V*jJZ B4d-01(^i傅!"Yh)g^hL(>:f#ģ/ Mm`%e07 bv8G;XuKf+TPCFoFٝܚt}uV>~HTClr&gǯYݨzXAy=uKƈ]:I@J yM.3fxcsd!/AHLS+C 56xL=Ȟ|<鑾24 :QX%cd2D+@#nx#2=9K1o(00ѣ=Ă6U7.bp/EiyBk]XY !)20F!9 -@*M-U?xXYWZիTE| l2!Y^=9JA" t0لI$ mvʓkYPN:h+"Q,NFazwag5h0_[X#LaF=zQkd!@Y*RN% $ %oB/?ٚI#N1|g]Ou~;g:AR>y)^ePXAJz( q̀ H\"R LEFX\ee^"PX1KEj$0)E$Pys !EE4SԪH#7a:q:ӑ3wTwjL7{Z9?߾w6i # S,cҪUȞ/=Fo _q;$ .Os4jGfAWv'(I-v7g7)>g=rXa-f8[zܩQHZā y=ד GŜ|B$Ɉbm v2`jBHу +C&pXCwO4O᳢z'y=Ӑr~| <S @uP70AuA(Q/XUK)hΈo)^4/mD5]Ok rV+J[fNuIÙ[e93}=Xޠ 9&-(4*=x;j61*UNl˖1r` %eT* ur 9 ;j~A\ju{en:_p!Vm6oE$7ӘLTjVTBA !eN:j&=!օT-]-_)2KS6{)$)k ^K2 k, D/ET$͆sX⬟5LeއصصS=b0Bnϋ_)s`޳n|}E+STP*!v!Ôf9ʁP.-@hhWh>\g-+Y]MX׽mrz39`>{8W"~̼Ur<'Iܗ"P\t)R^dHɻ"u / 5ySɄ;qK9C^W@me%ebdpN}_%1P&%QZޑCeRQqr<זf+Q,l jWʤxˍ[#HL PlLr |_}wB;ʺl{})|w^lgLJj͛VK{7_j?+#&iƠ I9,F1[<Ѣ+)MSRɱv儥jIi +֚pk(,8U6@ TGTm)w,n)_fgqyij} :<܌޿;k쐊1Pe4eԧP͏X"I Ny#ED$ݶʨ Ě"!(E Bq$WV'E-,ʭsXc8mˋju\V_BHƦ,<2"z zF YMP 6AXG1 fё"Y5b$Z (@,j攭 >I*B*KшǩQ7ֈzЈF[YEZzZY&qj%ea`<Y8"m QlM5Gsl@K_q#+|v8c|v>,.觭,)b&)YEܲ1fuwկU՘zHL8cu4(&LR 9b]Rn٣I%"ˌ; YǃEa}3q47+yp:B]>eK )ը`c ^gTQXQcȒ8='3H4nT& X*1{r1=5>AiB@^rz$w#d9wgUu }FCZ_ th=nqXe5L,FzRgw3qEY^nzܜ\r6^@i)o|_n;g[%Zb{7MnѼerrY߽dn^0{<`YֽՙGFbx^9"-5< i 2nL7v7r>yr]B5xt䶩0^XoqGR:l7Vf|-SޜfXĽf;Yiwh6.wF`UAMԄ-bxZ&iw -;6~?mzLQ)e&L|vܼ`ɫ*l։I}}UIھSw\U_N''|@.] (" ¢D,zI"V('`)*"BwVcyNM"Xa~7JL6޸@yզcYcsf)}B%XjX?4c+gnX_]yk.QA)3Z:Mo߻ܨѾ1i1;²P0vYRB ^ wE%格@@kۮm#SIY&/O7aph#_a $~&sh6O#P)DO$W(Ϯo[zm~O>[4D6$meLzϤ_ܘwm4J}[׊Vێ#W6G?l{M<C_ᝉ`N.D rxgVޙT?Q*}Apfb*{9pUpYïFR + Xq|1ps4zVwJtWRhQds LPED9dq b-{W.fF\0kT aEΠF7?݅ގ+gɼvOM}ٷBE!5Xh-yF:H*0Vn5|r6.ǟw-ss+&5+K'9X0taE$mRA:,b0WHyiYğjѠ|H26S@5>\כFSX 9C&>%T\&Ji6aJ +Vvw?M00@p-}g!3S 0|* u0;ڞKMJGXWXRk[;tY}& W}_d~DŽcF x3B&xo|y~&ǯ! 
KL&d"KL&jg2Q)T|IDX%.yW-X,7F-5ɽi{=?g#/J%Fcg-ʁ"_hY"c /ZޱyJ`v 1NJl:+ ;\Q aˉJt](u2μs۵@Z2@iw̓Cց yX^)p=L,)\4]HF4[F /3_XzR,"XH.7Hu"ƻX?ŢTXH>$dfŒ\TL"`QDC4_#T/+I(E5\J~bXpX.N=1fyl`qgx>@=1}"ZI>z)>_ U>(祡8%*WJ':40 %omـkc2v ;Aq8¿ id1JEJqyʒUaǯ\b &$eX8e8+/FN+@%ƸWeX"p}v:`ٙ0r+!}:JJW=\JGؗd}Ԁ_7E̿w'Q9࣪sލ_`QuJe1~\?Ƽo߾eIo|o.?{JKՊ=}p_~/?ܗï8biZj~o6"gpPHC K)FD_As+[Qߏק}њ l;CtesT 0^F2R1h#$W.:9]yLcyDײ1{А {jEԑZeU>zL*&f *q74FWQ ̮֮{n'*F~͠~:!qJP$(FYމ~* ةO 3(vlxt` hKĨnG:_ϫȗnG3_# |QPi `)1I-#q4 HBZADϲJӘG"h`ʌ ;6`EU @2%89%x,Pl ]|Ŧ,-vEoYɲ$~X"XKIj5!]KT2g 0B\>Suk#wpbOo&wsr' AHRVa S띱Vc&ye4zl"XhD4:|y?3KIN֏窼pH6g1D<vaDXqҎ'F}$vnqr ɼ^E.(qC,ȍ$S{+%R.W,9E4\'f,ep 榜GhS(';ʭlIg_a$P][v_wyeKS]6v I`xoo/w/i M%**! VOp 3ؿ=ch-JE2R>RLzSNq飶ěU])`̙R19qr,b!Ɍ“bᒢ4]ђR߬>~8mt7͓_7(h44Y^x>ČM̓9DsLqϸHE7DX`g Оb` YeU 9I5q`͇ݴ+axK8U:Ug_.RKb^g W1{?h:j^մ|όkYEP*#`ö=sT5w‪hiϓ8皵e: L?Z+ H`nLwmItReIp^&AZ˔"Jve%DR ~*DB[5-l1<wT,@N%ELt bKuZ\ uʃwgYc=9>:Lܼ;Xn%$9Lh-A&1YO`&Ά_CQSvGSѵ>(V|i)i.Nʚf3 9*5MSlʗP-M:^TT@Q!iM%|K*TC9T̚BdUvK{U?14N1* og*gLY4++䆨";00yԱұӒZ}D~е]7QBm( 1pUZ=r!gir:t&Ӂ kerm hޫB|XW6h X"^K P8"(Z`G+eqT}˻gJ/s~+Ws[^~VZ}z15_>C>fg>@>Ow'=7ߒo7_J> Ňi<(҉S>t` ӓ:ϙP;qo*wL],,5n7pmKwf2ϥ2=6W5[]t6tEZ'1]tBQ\6'Hջ,Χ\L>Lɺ+uQ/GerSX&Ge:9Y.}ꧥ{;'r_7eazy&zγÝh̷rlwJxLhyj|j XfFK R-w_6‘K9hP$vvTbyU.<H2|{->6r\[ XGrmfL(U{CXC,:)j+6bHCɪ"F\A3˘J:jާjb$1H:Ĺ=zM;1mʧ&ꇾ}W{ra*]q(|<>a;zt#ڂ"KWa)w.?;;[Y/eAo]o&?/~ۻ,Z~rw'ZWln|L|+\̷hRM/&/^Md6NBԓObO' _Ѓ<&e[&Ң"X1ݻyZc]P;H*9&0^;7>h!"B%YL9 96E Nת*2{5Fθg__eE[z?} ֈwWJL~m V3ݒg,POfrn9D6b3{C;6P :l*j llDvn6;G6B;D6B#N;Bk $'zVAyR@^Vc3Q)T}a (Rr@]h[5:_-CkM!Bpd5O C( %͆R;RY_H&udй[EN̓BTމbfE|NE4ՙZk3:F[`Ɏ;P=s5xԳך@#d\Lބ D֤9QѢg 5Pb8;yfXnؠjdqL=iLe}AK!\,V`lg^ lI}DmemvD`1&(JV*>zR4eXh Q Y }U`HK0z;_}FXtJCʦl9 *9*E(*6Zw`P+~'=!QC cAWcH V * ILDDH$T+'0؜g==p=2B&˴O`1CvPeŢU6 f5zTOLx?jpt lWo)r_+^(Ƴrh}4c4Rbt옣J3d4pq.rvA;!3v=5t8H?rDKGY&$&[J6!fS&3:˲7(Ńa,:1D\M:s:1e0j7/>*%@)g@qZXb,SF6>N%?W< DU L(Q,6 p$ ]JuedgI1ɋ_KD֋D!uBZF>#83wލ~nT[pTQ D!tAa[sAVfs=t+֔ɼpeRFЮmP>VlumC%hJTfR6ιt8cwe/v;?xexT=V 5T5JcD X4"ߩ3ɏOy8lEe+qgK;, د:wgC[ (~I8x(b<ڌ|1'"Wu}*Q;\jyU-;YuHAc`A 
IFbeqnI_Q0_l0>}sY9{B+eqEə?1܍qDGAG@?^o,ؤНA+2&N+@LhuI]1tӈ>A=VWu_mp\NOIlmLFsʙF d2"r "6 hYc 0Ǥ Q%;[D]B|?dxd VEp'3dJ<yXp}H*cU3-U="U{CZ/MZ2NZߤv$2\noU&-&\@s@3+Fj"jҒusդ\->c@}`Fsun_U]7WMʕ%v4W/\y KGJ\5qQ틹jҒusդcW/\9rWVC,Ll-rȆz3vekVpvu`` 7"*o~39=>?7dpNkq7{t~OfaF)L[z|>ͯdYϓiY>û0.2罰q{@A](@[JKﷲfmo&''Er-6Lgakop&)Ʈb:*wL],l 5ܧN` Ѥ5j1F+`5o\M`޶Kt'qAm9|7i͖"8wn+\w˰{dD`foU썹jΛ&\Hs|pnCod]L㧿}yNv+ }&? ˩ѫ74 +UJ.;l) sj\|BzSdSڨ*$0B)W̺K2B]`tI;Im-GܣC/3W"Nf~?N1ڌ9B:E> z W&eA 9z*Zy3QDZI@2Q\}響NΕ*LѱJ]&tN+C˜Ǔs#vr0CQ!!{VQ9:oT(igY{_cl}jJʫCۀgSr Coh5p taP1QPfr,X%bjGDyGƞ Ǣ5=ƚZrs4Or^ߎ*o:TIwȤHf_qDa04 ~~^`?e?YldA)/JQŇ!(I))`m(@.Zz&4([ VEP9z'a1f_e1uw{>8|~w7"^<똗VqED]_/ll^ݎ>^D(TXbt=%pF)t-)uHHiE&i z ٬!$ f'ku] $/PI"6fgq7DN'yHޛ{=n)b,_ɳTBndh5,_wCyHP~}s(7]Up|O7,NvlױfKQMqv ̰=lMaܣa$j)]-zYAD߲P)^Fֿ(GTf@jdҢ!tNԺW+rh./1onsܝeԮi+>9n.٬TNtw"݋uս=3l[ g=7Pj#e@ID)DFIb4Zl7M5DMd#hXT{8*;Ktԇ(XXI֌ٯwWх8X]B9³NC>_x<4|/ kl"zXȫMf TԈ/Eѩ`dL" msu^ERB#*% hr=CQ2 ru]+rkl҆s݌;ՠXb(yc) " zÃg,H&leWu17Cfd(",,:/DXÈj)[!7#g>\8딱DZQ7ֈzЈFEZHl8a~Xw 5G|R3*ZEb@g0y6[t)'i&{ʶ"瀼̯Y/κpi^A/zqH5Xi gPjh`oK t b Aϡ{sӊ;Շ>43:[󇅍ܻ&XA!_)h*•FARrN e:8qhV?֒T2y&jO-," )ja6Aa^37>ڌn;"i&ʿ=;?;HF!Jv>:vŋ,euPto;pPyyƃb+(ue-5U@L1:D&)u#  ]|e^+<5ã P=R-x|~1Oeu<=?ByӬnvL7yGp*3Жc.՝#BGv2YGA* +JРyit~M}B=S=4O:h6Բ$L"hC6`T !c4 7oy*^i\j0=8޽xRi<`=)p%y ꬭb 6$J%AIAchEf-CCz(j~y`]/PUWotIz߸?['ھ}pVX8~uUˈ5Z+ۛ;fx}fVr3hE+1zLc0]XgK6ˤ Z.x1p X|{nYnsHaaF,s̮m0J9F,&Q&01kV0&8qַM6584Ok,D*(N@!EYVyRIg/BlzV^ɡvW90nw}ko>`[`{]2pKO}}3dyRa|ݾM%IݥwWWyAA(|iljjb`Hz6۫H9d^#=|n_=<^v%1Ox<=Wm{1ˎ[퀴^ iy#4].?z-4R}c%vmPg +vC~W8}᥸K?LO#Xeo.4F. oiZy^jp4*9ł2+4SVTb}xD㇟? _XP_3d_q{NBWCccvrOk+X_ ݰv_ѯKo0l?(\Z-wR?0׿BXhTʏxCՉG̍ϓwGꝱ;OMՄ\<}w\HժIN1$۽sː=/!zׅZt>kGa$ CDV T\RLATԖ2m۶zE%P)p))B.hL$EPeQx䍵D붺9~V;M7s?c GRrI_Œbn'ϱ#D{5A-)(2I2!@:;Sr K2,_O #u4mSŤW9sJ F`ѣ&['AY(j]9u IJl E2:+A$})$:3;HFL ݆Pmـ,q.B\=/ $XX)CE fʣjmdW2f&Eo2 ruk>8b*fCd3$,(5 2#T#v(G=Ezxc3h8x v͡}ȲU-M)*w@Zu(C d V }U,Kuj~wm7ݵ_ep#wt215~$LL1nd|.SpTXs롇:.͉ndwy6kg`5:ҪQS &tW^5pMM]P:H"th0tEM~pῩ|ͧryW4$mE’&d(L" Mc/uh5} ElIQ!-+Bf=ߗđղm&`JPO9+M` A7ލKeՁgKޫhrP̀ |Kmw dP^Tφd'{~3Eo,w) +A*ƞoln^m1Kv05 )Ny;;ȋWmaP'͠V. 
rC Tjv%BԡP{_ռ=f- 3Ck@B zC%I*Ѫ$I%ׄRRBKT*J|%I.=_s%گ-v&.D;`c xjpNrڎJ(8i:\i(mҦnJe 9{k#"£2*(&_lSlѯL흟B8m 8'p1vU=z}zfӇJ^88s` ' ʹ0nYs_Lb7~5l .:`]ߡ̩^̺)T 9Xs唊^hH!O>JAC1hL`ҡJ:Sj|jʆ"GhleC*LL4)$sE҆b@0 o#D.l3rK'wk<57o$ڸ uj{yiCV%1~` ZKUzOjk ^ tHw^ լjIջǮG5PyDiuF3Wڡ" GUjJg0ńHjZkgj:)T eG{< h/Q.RYhM6K ժ)BV]n ;':eX7-XyFÍk L+a lmW3nsDG#h:'Q(h 0rfSJvgZ 5 ]"+T{&0ߘG?ȾZa!Y1+" B}LZlBlKYiBaP\N4yON. *Kۿқ@cj~A]BC(FFfteXo_ml-;~ sN=y|g$ /_%[*6@ w#,\I bt !V`@O fgof-r/im"":oU?wձ߰)\㳃n g >6$rvsvzZ~"")Rٔ.BPy30v#} 1l<:`p1m֒Np){x@g\'$dJ6A{j n7袯1-ۂޛ3R,+1i& lP X1[=բhÆت#Jj5|h^P|"yߍo1X>aksM_K\Y 9.\stXe.d8圣ί8B,Ʀ̣(=.``yl1aSKېV[B%nUܯU+%#m/A_H/lV[RF0[{;J|m;okЂdE(H%DBV8M$]qbT0D&e? SŌɽsc2-1FP>k"3@2w A59O:trEo;?7e51figV (Ls콾d1E4y!#͉7˹w}Wxyl3qfg6 |fgB߂TޑE*:`կUp-*}ǮbGaؿC{ gfolfo}05&gioGi|7\}\ą}Fl}\>g8 V}K,&.4].r<ۈ~oX7>^CD-wO;S4}O)>EӧhMEӧh)>EӧhMS4}d$8hMS4}OL;Eӧ>MsO)>EӧhMS4}O)>EӧhM5;@Kbz,/GCH^yI[bb^_rdBk^x`w6^=`Ζtqp]"Zt= gSTԀd7&bٍ~4;Ŀ`M؝^wV"[\6.ғ?Sf5)DG/1~Y;˾tanJZ.8( I[^Յ.ֲ$VՅꦉG,dԞg==%ŴH̬+V'S&9Z3GHsQXhJ6yӈZQSg[p1 bhR-qX1qpvcD:Z)ER<8$X'.e &`P)%OlH?T-o,D9 -9$ν!G/3/i'<Q"@Psj!tMKXkuqKI!&䂳}<~:tosA7 ,gY%t51TbN8~Ud.r*׋~|+'77Jz'tȒ]~:Vm2XH\AH)TeJ k@5 {2cKtJDoLַ9EZQlL[rqqk08_/H+ft am7@"//6c־&{|=nkFeE3ozǒФӥ,Wk#*vDia)n^)N+j'|e8"BP7X aHdk3 mˊ}y% pP>#+&T9 G Z7T-d))hu͈,t=}Ȏay"|eǤIx^YV[b ٢˪q7L ^ryQԥZiS+'KA$59eŬFte~ a/ֲQ KEo.~Uaګ0X{ˋk-Zセu֖|zWޒqȵWQ#kaK8;nlbI֘CgL%T]QkATzFD?I֩멥Ki1;e­aqjdIÆ3*|ᰙ\_8* oF[G[riO~fu’NO<=9ҁ=vfMfι ʾFi`ꭚD$9|P3 "DаnIoYnw@fBqHh 0V KxtݨLnQ̣ia=jvvkO،/IsKIuž )Wt0U١=Pc ϚJfoQ"CʢE5:B=P}~`"*%)G n&UT1G6?aGG|t[BMI0VH$8k?$FƑ]^tC+J)rll,H@׺ɒ cJ6@RI; rUjpPWƸ)9lfgx??܃ "cuYȝW iWYf e?CGóϿ-uKߧI9MrKh4&-ɱ>1R0S&=cqu5sEM6~e :tf 7$MbehQrRfl:!FzEf>1%x>3>NO2b+_/{ueum$ڿ^!P٠pzx>7x:^14-TSŬVP%dX g5F RCctޯj־e. ْ{]w}x|my_feDztZϓ9Q"\U$X(`ɦTΠd usfvbyyOrL\~ޅ[lcp`hנ~ fv'ya' xsϗ|o܍;ص33G5O19ajƤ&6ۤӍ?ߍ$=Sϴi]}~^eL7Y\PȃYd!l9Eh`:{Վk=gwz=Uڜ!m֯S*YLs#$4`45_Q%!(dL=Ok(ދ{Ѳ9/k9 p/ <Ģ]q!V@8o*!FާL~&I(EQ(1Q8TX!xGQ1IQRΈ5E9;sח'ѯGij^OcBiq{{毨Z.;uKxasDj)@=;S$ HY fc 0Fjʊ!jl/9sJHGAkmʐch#Yu=1{>$?,v!G/HXrpVQ"٘) bDmDV}v0/_P,U&еX ,fDt9(]H`K"$P+I@fʽefm때 y$L  r%8MJ6C,dA̬ cǔi1? 
SwRl) mU/#Πx"wSll"Hizfz (iJ]s+: go ?^֧~q6o>6|p7<8?L'G޷=P|;N-s-/N>Wz9ceԏ7] 5Fw\ ̭6槲 olNos_ t ?8K%vQC]yM}|O~SqΗuAVNO^| 8èLaT[*g墥 ͔oYr֚k815G͍+_hj) ٷ=$G׭\'fZ~X&{7vV7,ލ~_j? _nq痶 ;W\]Odz}_0Fsq=uDjouq7#ѷ ha?^zl RW`/F]Ur _bjľJ*)ү@mm+7-3n l| `$;1!ѯtr54l2|΋ϘFFwJFZO?nt`ŨJ/EMWjRi젦C5MEW^f~wFcfzD=-#~MqOcM.lҢ/U&pvrGX:Ɠy?5H' cŭ2XBغjd`!٧KkdfT&:d_3S2JQPL(S(2(UdwțhClC>Cp(#^Z@a% +/2(R)iKoi)v5>\8 4j\OD>A]ᠮ]zi5.kΦ̘M*%Jxuw51"GȣI HxtsSn(?͓98MK?__ǟƓ u+|:=3޽Wi0?ON >}׮<ބ0J|6k?;mCHc!Đ'@*mgTJo/* \'lu^&WgQ@V3uʥ>zz#>ϘtB"b^S1T0w1EI|.3gX&!۔6F*|N\qDdyzD% {ĢBpr ].9jWlqH|_ L:L[rlÛ}N}jlg.VzrK%|jkML8kQؤ'5&Hei0QHk4SӁ. MuІi㞴1}M=d@f߬k1mv9-\w/4ku6fcċ.N &3W5a7a엝jx@<:k a3:K@2d,@3ZdH 'T@vzu ע́5O-DEn6/8kx=+}\Nں9Dy;Ze jm67WopGJ\B slԝ0NHVLТG]0viˇ_T7'6}$˥@˂sdģgq CS$XA Akst % "R huģ;cM\~tt!JX:Ս5a%xEH>,MKo* HbN`ej Xӷvd Ҁ1`m22!37{Cʦ mm9_q4dl6tĆ2qE9`%EbCfqNzqOx@vVpٲ`F#Y]ҮَJ׵i\=Ȟ|xaYyyZ4 (1 ba CJ'u"s:_hm/qolg%C]ࡰabm!KD,jS%^=U[Y y!ŵtrq1뚰Đ7/9a4@|m JҔ$} kԁۏ5"HASD`6:m2IKIOAShA: mfp2aR߹0 H"]wRF 夃"77 0d LG8% ;>3ZX4fe},r(!M0WK bW+r ;':aZ4 tUdvQƿ/X{6A4l-UrYh2]CLWOty.;o\)9(,7X[1cH@$Qqeh]I"͚~q_ GΚZ5f_\jl1[Z<ĴuUyiynk߹|<7A3Z+TD-Ok;dA2) Z`gkuNI.G J(rU/HDQ61zUvV9%g[@S -1N4(["ΟYyFӘtnw(⡽*{RA !eN:j궐u!UGso@$M  Jm{gPv0( jo#"MOXd->a v>E&8RF._%ٲcIL٭qj6Sz\DA(9/-4yRβ=YK9ABZdcԔ29)]jLoDPRbYTu., ^W,Mc:L{!{H69ޏ-$geQjXrIG/B"51Dp>)Y&`PEjbdڍR k@)F9H2cd%RՐ5Ś6.!JLx=+2-9<9iM5vZ;zCk}wxE3y}<'} &lwv90T>fd[C3++.$ D6U4%y=G^ p:"( =9eor`` 4 sqfXlf(Xx-Q}Ue;?Nivi_5W{bĎT@Db,dv:z*Y`1ʗhg{S"&ԉ+|SSMP1[aS Fr-fE ,˺Nt9q6-s0Z͎Q{df|MK@ӆ˜p(7Y3;'jz{jD n!Lm mlHi@^daɒ^ɰQ| jfTlTMp3qao>5^ ͏cQ5FD5"∈{?K  "6"'!|X9 X.IԪFZT46b,((KřH19&*ؒh I:8̯:2..r i,91.∋#.zYE޸koPZK>D]B%!9FXBf:*G\<.Oq,x@u}q>-]F>(?dFz<7z?>#2.i~0>^?Tufso ¾I+F/?XeޮM~x3[XUᡸC%_Pi2a̡D>wvKl^d|j>]`?MVW>w3]//l|]ͯE}?}`\OT_x;k>Lk{ahzd  xP84V}QmrΙ5`0둘|X\M{$Zkën&%o M_|5=?[A{"ЊZ( )(]ؚNO18ڵ ,"KGLT,s S9ҐЏtEܱn~i?ۋ>|WEZxwZmz.۞7O12I4h^P$8E0NyQ:M"($Pz03Cq*Z$ 9'#kAF!6ö`("H IURr*itwl͢&%K@u:fi8$Bf1A f0qwcMn}=5y+ FXb(ClPZ9e/XA빝9I xfg'!*Bz S_*&d¿@ZEy*]۟G>xqZ)sZY6{yaKrR騲2f) O6I JGow96ء-C%C"Rڲu@F|W^7 
>-fm^~0.8YddNɱ%7=c"JڦT7]ax(q=J!pLkwu)-N]]AwiKyRw5SY4) ̂'^ H^0ZMtO{3UW O,:fc֭O~םqz{q4q`J.ç=7 .x%[t тX~RZ.veKh;ɓ0j]62 brP21!`AdB0k]^ ;i5ðVd$+\9O $X\Rec4tzM@T^wln{ZӶpX58W#TQ?O)T*ky<שЇr j^v٢Rp(a*ΐL(K4q'EvѮ[]}'7.'<,]] ?@KO&AQzwtqUuieÙmzĦjm?./5}_[dmv{U읋ȴi!DR^{0Eb0T !-g?lmՠ>*$ŮEv=Yi rI.dgjQF-Ie<8Wbaw$q6c;6d58|]ZȷOWS2BLJx>-_r^k>b[%["+bok52\?MQcx}&ස-{˼ߎ37Z9l_j*c- dyءa;$ '-Sf҂g~h+ %nc9n(k-a芲m% `@DsTɮdI$a$πWGDgGB-ט5 /E4/xox"j9wߢ.*]?8 M&|'Jy"FA)E_GĤV6AJA -5T1R:R{ JyjHul@:6ϩ]9tnv#ʙb[u;pfиLہX!o*ټ\y,bCtCR`eN `Ō fi9 ͣFT)цv]0Ȩ"!@ZK ZdrȐ=Af^jmb/"E1mm#bklKpсϙ"B3𝉼- A(1xFi4灬r]wl@b(DӖ쁮׫ûnWFȽ)UAo{R9l$uj0òZϒWn g=4nFgVa>ospʑM̛\KtL!M:.S1T7Mj|h^,oO'*E[nm<;{65GޝKQ֞z6}ݳQ̫eERy´*)}dqُ$8MipZ4[_/P~\ޠ9 {ZA k7\]+p&?e]+W=^[ן OVC+2[]rNĠk/8F^gSs*wpEeMtT]wtXb}b~,-Zo\(t ]4lѵu&)]3}xK©1!Ba.wR#J^dIYH_IޟCnHw>Wwݘu¡{rΌ[^>Ԝ>+Ozmٹ8C=۳,__,O/wBtX4A +%.yk;d"Jf"ȃN#'SA`5 3!\FlW1Γ5rЋ5%IД"'p,LIM(#a@M1 g*)b3q@W<$fcw7hynlN5dW)8:?m$fW0}bJVzORN*N;/5vhIuv,<#P()J9iǔ3Jjż2o ΋ u)hZ&*zm.>Wl4z6dl]D&O%>f^J+3z5FpV-(^+q3ʊC1墰;="Ѐ; &B1VLz6(eT#K)blF+l JUR6n=ig/"h4192z RXϒcJ2CUN)Y֯xq% JPV*i$VR/TE;wm[qW"tsf̓!tnZO(0`*v>+,efR\"䄃lp_ M' UϦjԾhA %#D%s952FBBE?,.0> RtHrM!QQ)晴|80`PaT/W,{:瑫+z,fsƟ&&d!ܠP 2Q X˫O?;{8?FUl^nxSޕMiӎ{GR i:2W.fT9H"LR0}YɬqсӢfRD1.6nEDvuQP_z.7Du>3i'RS/źXZ'!O*1yqqqKn;Yȱa{A}b|(2mN9†H􈓛%䍂MR/5:`7OowPJ$\MyRo9S,Ol%e} PK*b mZU BYx|y %߻-ix-ty^舋q6T4e$KǦnYḁ%,?ܭ۴[j^uU ̴7ݛ;!/A_qTjq0eB.>;Bʩ5Ը ]OKH6lE y9{FbT6)uII\`j&ő^ݑ6J6-fhsgB(%T> MCUl#oXP()Rgzug:]{`aXм#\SE_= ]Td}nnsIs6\"g1&~c~FnegN:p'L+ x良_-{1)S'Rr0yldT|ʀ1X^%4-jL4~M2:5gtjHK_=ӼnrFY_r՟E|-uX & !nuBNE,/.K LgP \E=W.g46K- j# 'R mEEt1*М+B~/E5Ob7[JlÆ_*BO(H'ѾDԧW_>_RCjY+||,1y0]Dѷ`>x5H?'{E3ةhSzd{)i O觙G U\NBeOFhj-[LR!Uǜm# rűӒ-(F&5 M,RsC fl69b5Q3Ĝsp(O~D8| oDvڌeW'/yMv{vD fJ+c9TUJ`LP^I CW}{j.ɸVbdOIlEʱG!S+Ք&K uQ4ѥH g?ʳ~yq~~vikQOb(|#e$,SkA&^Nכ;iu㈢!Cq~v,mJ!fTRjA{ J3˃a2: F1{S42(SIAdr8>fe(U.kt؁X% ,¹d DRh{^zqbNsda9PP> sҘR\\D;?/H6b i,,u:sL٣:QԩeS+AHz@K\];RQ' { 28)]va9凋Mgt_gUrCs^Z]^|ݤHRWh}vjȦ%MMɧ7#@?oIQyZ$z1>a,6M* XCt"տe(2'.$ ={w -[bku\=y)(78l8qW,X,\͂/#vyՙ냛UT_׼5y_0峳/g%H3Pb,I5>UK:gKI& ĦGElĽf>ItZ R*%$K 
Qُ|%w$lJ]P{aOY|SZΒTH"DN:eg@6ɨۻbɈ<|\3 Y!UѬzQFGƾG5*FY)RQrÍI8Df""FD\qA'%ԅdz=֘P>fI-9C1E>jMj>@u)R8R՛52PNl 6l8T8z:ל:YɱHq\\p}PHjY T52rJ+uŗc CwݦSu}Y̑56 zD?Q*hG?^Mw*QܔYڼ=`Bl &SQL"4̹ö˷)Irzyja0[oǻϷ;=\릴̮)f#f y6P2]SzX#$%TE(e|ͤ4# n<]p漧?}<#L|qR^zzԟeO>9{u* X@E*C)Ar"T^1 bv}A4҂/ݖf o -"c VT9DEA6|,0.H Dl^)sB;#пG{Z<gV/6ءg<}m#ң ^O <%f=[b,>0!F)5i}*XJъ Yg',!OJȾ)H \Sj4 CBN᳇ġ}z)N?c=Ki}/ִh[Q`ɛP 3YJ)f.0$L9?>~%/Ctvz2tCd6VW}O^x n #Nwu=msܝ_'7q;'pbz+ؿN'plpsgX2xA_ <_m#y]!%E_+Kz 5_s.^&CdT$'kү9hqoo<Xk/PLl+I['AcMDXKT4^@nu4\a  gU3N};}~ϒF\))P y)r?F|O|Rv^NJ#*cz &D,!7aKKrף]]Cҳ3{5ۿ:&go21OiKCR3h 1lo\RQPE/)jӔsE1"1NtI}ϻr3`| ワջK=O_tFK_CޠÉ~׫~7:'yBx8՟ ]O=ps .[vk_w{Ah[l6LÙփi{;>teQ>04KnH&d+`jNlnf0~6ΧQc͌&G- 2 ĆRJW0ʪ{eMJ9~l28CKC`g[>!J@Zc.s>P.j[|:?;4?v'vNe(>*^^f5*SR0 kz^x!bDsVAnlju!\)[#,11s,z4.*!& dta90M{ x|DNo0<Ācf >rW\>OQ_~BBw1ԛ7u مXtP-1[ }'6J6sSܜKr{h^<گ61xY(@1nr\0&@B_JAԳZՉJ' UMco^.*gJ{_-]6@}f74&ɗAiYH;eilkm `]E佼;X{iɆ3Wf1] l? Վ~M*xz0~`.i;)JӏCyvCiv㸨IY)g|ղJٻ^w)B]57e#tZb겓m&<-fzx~3lw_OK\/w>'Ͼ`PɁ?%~AO?v&.ⷨFyy{c;s73Us4a_ JNP"V"XArT 4 I o_vr/{:q9lC'A-X渣9!X4Fc,1Mh$6[EpU%r&ijb (w[ 9)P5rL^-gK#xz. ELjݓlʊe:nxtru܃\ mr6BFWlmG|&YLh.]MJ.Hu|.>v}%gO^(? 5ϛA<[B?Ӈoxphv>XZYWwYgE][vmHtyǼ/n͉{Uzj˭]Ju(9i ҕ*l]e5j]eo f9-]ÿI 3bCWsrtrvtQ.Bۡ+R B4g*m]e\:]e`Zzt'$BT@cJ笉Yjh:uBȖ"]I.-/x {̴+WMeÂF1aw4w`!RKˇw`<)mq?%ӗ)?6Di}_qWWݽ39'}9vK;kgЏ%& 2kIeB*"DG)9QN()'˼댾ton_S:M+傒#~98m+hmZvtZ7igޜ &ʓ;(Uj0yvpl |t#\.JW;}nt#J'EW@Iٖiz*)gAt+ph ]!Z8Ɏ(tJ1L;bRğvkLSRFN?3JZ|%#~cE?LS;W?qؽCi8s^3E%]$^w f̀es5U13Zs4!BEˬoY!o*71trhn HWB ++ 42Z&N2J -]AB7KT2\BWHRV]ER)$ʀ}VҎpEcHg%Ȗ ]i za>=J;^}I9cJWoRms\2`Q ZJԩtFXosB@V%ھ^?<&J;2,gd=7h J2fp2JѺſn6j'l ~\ >[n(͉u芶toSΜ6ٹQx|]9h!u_gʦ?2IJ*rC* Vi]1m u1\n׋ r_?х;Bצ#/ &,f-zݦgZM%s5ei#i4Qm! 彍'|j &B 4{z+x\9<FCo%LhUm}gT3ۻ_&2OgG;}. 
QY#66)zpKwQI %.S%`(yIzԮ۞Rspmi3u19yK;_ڑ'}lo}CͶDH\kJ=!&ji.*`wItOZ#U!2[hvU׺qxYLW(Wgg4+<*.0~p "rz1?_Cb8t~_r͂']?_lDOvKSI~8߮1+sazaܰul~zؿKvېBvaȮlLk2ɢ).J Oi=hWrVL8vD+Ǚ$%L_шЊCĖm== =@ZU¢Ͻv2L,䵍Ai fF7T5,>yO4 8PPKH!&&(řP;DP8* Y@g+UΑܽ&HCrIreh)pG 4PoT.Pi])@ igRi o5ELh) &BlKu+0"8 5v|qMQ[kֲ| YqȍodF1zz?Z6rk̙.M9<,(i,gsBbR8G{Ѧ|Χ?)'Y7''br"- ٨y2YV L2@H*P r~ zR:%/=fu:aʭwҭçq&vh~ҧOEz/ JwIDq>dspvpA3x^aQi{\vl 0[bI$H>"UU"UWE'6Pv"NA,BOa2 O` h[K=Q3FAA5|(p! 㰙d<8& `Fd7=9m?}!H>}حvȈɬ+?p$vnƍh2.,6 2RJ-U 5rrNfp3~n-0w0 Zz^tWc 0N>y>^ehVZt]YHf̎9+n#/do om_%${iY_}d݁-/*q#|4Ij[t/hlP=Gtq*@3>ϰ^<^$_{P w uul@"GVmGObju/ 7RDe.)֐Oξ,Mi8?9=mЗSKm 7up;9%mXu#H.GGTs,}x`~v-ѻ˰bͫcM7pkzҹm3AQ{Kb2&識k/VbWwy򠍟~ĝegĖܘ5 }٧t.SAVWB.'j,Q'Tj4OAKLisW`x]YVBv+)Lվ'/eJrAM|UMы5F b[ӱTTL6{5aqmՈ\.aOkR+ҩT=_;|ޭb1|fC˶kd:H&+]'/?[=kjtvOkxk 8=R崍+M)k1fqތMr" ]b1#NYK^Yd:l+TZ/8k aHo+89JYaqrqJ_w׻ `?m}eK[5⼈;K"Jdt$]FĨYDD"+ո!Jƨ Hjs|L c&5C=~ A&Mkq#DruJڳKlf!V։ viMyƄ!MڍzRQT Ik*шM-"|bO5**CYSlw*иw4 zAz Oz_]wڝ:/'Նnޤj"7);vm 寃5vy8;9݉kC{E=wΡ|:o>폭Qog>|rĢ,=7"jEG\(-6d:q*nL֡CĄqLV;qo;&][X;y7x{,ݹ Jϴ/jr/ Wokyic4g?\Ƶﳘ\iofI^9}oWk'/ϛz )蟃h̶s2?wvˇ ?e} .j |,ݰ.^[R9$m RmN7a[tX楜ux<P&v: *v1D*.<{HAg~r- r8ydȬ joP(N X 8*1W-LMF%5{S511HAB`_B/ J~ ?j 3C pn'DGFd;XM$IVoѺqmD -wG~4dkAoaREƶ~&Rz^zA_9BvI`SeȈuԘ1Z Y`'y 13+q/ جOzgu.c7ȸ}wӶc@=Tm^^T#?˧/c3яg~c/?^ܒr܏\bəS - Jǒ`.lO]aKݻ56H>h0E(NCxMz ]m&rZ߀/6z7[+^:[)d4ljle7g~ϳU҅xF*fo&cߍf&8rrqvbfko{3@F 7TccCySRIUk扊qk3b#4b#4iG.=<)Xqт9WODGL+ [@"%uIمv[p,>Rl 5E#(RP>Fh6tLKaۥU'uf=.|~y3)ypц+E/wY >"L5Ij5dln/O'={IMɸ AD֤9QѢg10PCvN [96 qr4Ra迂>QK!\,z6;{RlSVAmRD`1&0J""W)EZmHчIT$x_=ٲEg> &(`!VaU1W.D)PSjYaZAt~[%4VIۆogkmdcQDP+^+ֳ*eNhRbt옣J>ğ4pq .A;C 5A/mcMḿ C.b.D˞$$LqO=Lpzl\@ ( ¡3_;Vu>:% IgcPovP P,[eJkЭ8ʂCQ!K|d!Қ)Ī_A=rbMn#֧5rXOʙG6X19CE8e x>_=/BMH&5mĞZ?KE(lB&(>~e^V)#JʺgI1`oh/nREʧڶb7O4?~7=jSᨢ tBpPsAvfs#pN芴ٓ)yʤ"jW"34(H+][Pߠ(},.peRGW.#LK<w~Qya֋ۡиP IUj,Q-U~`2B# g__ٕxtD;Fy9u־?Z:PԎ],>tѡEo 6SyG*yPw{եG+0F˫2ed?U[ G jv@EnP`E̟8A{Ϸ4y˂ RoN6G?*'tD)khi,ڤНO_ WdLWԡ.꒺1vӄ﫚>Œ=TW5\"$/h0Tkd03pkgJ؜L҃BDp^`m.njmIy08F cgcB|2 ѡ=.54ed5>)Kyy3í|,{\M>M¯ac|x|b@O3?Ƌfm.ymH*^rп#ՔB]Vv@uS|Z 
{_LFYȰb&ac'̚jT)ԕZ-)®"tmw_w={+-JcT&DRޫ`J'2hd=Zb )Y4+sS=x qBBΒeP&Pf`Xܢ ռ{KƄ ;hgQN/IaǧRSR)Pl !O2B $b ʻR^R6䈭Hև-U?>c/z~_(iG:D/ƖAk &m c|ZLx=~RUjm?eQAy(JB "2O2To$,B9"#R k&ڰ&6Mа-,W)i0GˠA@b߽gcVj҂˰8RD]EDXZU\sx|͖Ⱦ'+}L >^\݌gzX<ʅt_[wi(% <_IQsK96 vvxr^s6^֟)||!DTJљwcqTYι|y$NttΦ|ȉV~ְǢWd<1`y&O˳ fvLP6NS=/ws_GO>3P .*3jVSJtK<;T\RQ4elfc"|!9e]1zU=XbLVS۸B 1ETu\@RHH줡2E*gG=A/5K6gfWǁ->PTQє-ۅ_>a7S+8Ygi7kN]t{ ^S :-[p-A !eNE H肱!նٔǝ2I]LvR缬Vdٓw%d EE  JGl/EjflvWpp5\uUZ  ~E?rx#'>NdF5=IIF܌O q|1 }bdË6u΄8K^kBko3"8'ԫIDpH ` F" awG \Y}T$-qc<:"(S%}N"fH"$ *k&Ύ}JA?'R4M_Qs8f5\bNߐ !]EPMz+'oŒ m(SoXY][ʞ1XPS-LbcEm&~*p~`jI5['{Acٽ#l\ o};[Z=goH5"=-5O_~,|WCF(5D"q@AQ!/]H" /d Ps:! ْOMRzrkm*.sRг\.K3ejَlfXle싅NAv,<(KTZkg۩ZqqL/h488~##vHE@BI2:R}`K-E/)Xm9Adg'% vu+l`0XdM\Q2:Feu9S+q#vӒy2ԭVǩ-M+ JM±]FPDgf~P;:(& Ec -=d %_daɢѰQ ٢b}Iَ3.߻ b/"BcD7K N֬3YN0$DVzЅLP\ k"aYQB0SI%Sg8lS:,=ct6Oy=^^]_ŌAK_Ks[</ )TT4RJN`VB" oSODQA˸CB R9*X rޒց@0Xa\HzI6m3)BΚbbol:ím|a=xHğЫ@x#lbIR1S& X׹TJoy/^cL ٣ [")2S8|uE"`r]izW5дX- j mIFQk"l/ B6ZxUE_ I|rW*A12H rFfǘI #Yk@KrȒc0??:#]rl<;↳ɭ1Y8Y1҉_~YnYZ<r{X=:IFFgst\1n#n\=kv{݊Y7Ww߷ +GDx=4K6ՆW.u=?t'U[SA^ߘvx|aNI?/H@p3fË t~`~r~G-c>pOz6v$-.=E&,ft›w]+IZWz_ɕ#2'V|u5mIfڹ]v~qZRa'C oø'Mk:n D/{>]u1,ֱl;9QWƛa_ k˭d9i.|qL>@f-:zZNI'16^a8G}f͍ĹEu`ҰCjߛ~2,vsÙR*xd%ۧYG~ɎP9܊vk5XDBgS Y;<4:)y %Q&,,H^K G !1%c@A%KHWw|9ToM}8txK^B%YH0ݍCaf2bHy4z+%s 6E"iPrFуX>[ݦb! 
YH\J95 ol@3^TL\ZMpGm!GK߱tFΆ^o!H/)#)CN*c6Y 6y!0Vd̤ Ơ,.$NRl˗8>mZh#Sd$eA;oZ:.*1aedL:g(̢6Y#IV*U@HBS(B^u[цgp3wZt_m(3yvys[eC-*n1_yi2&~<&+ojyd_Olc.V HN,m61`Њhq2%MFdBJdY #:tS2p5Ƅ̽õL/vFz8P ; CZ9&'^wȾVp`}}BA&4 E7^s@ix.:@C2'4AsKo%hh4վ]Xۄ8v'C:Mꒂ*Z)TYH#D23H"Ldh2O,ci;Oi(NuOQ˧4RQ~=!:ǻikG}h[: [RzR=-O[8_N{3ܐHBfy^{MʓmK)uʹհnzz5J?8&  o q}zQv57/3`+M޿)I&?eKhu'97{'O%=8F.gߟ~n7j/34z3nI.jDYާ@ zG;oΧ5* L[ӦS={LB`{Fg:iy;l_F-u>hҪ~_oi糄ʝ`PFލ= >`~G:hI âEwxNx^21we}0qG.ej O:c/0utYs1ѐެx.]|U՜ JƁuئ': ` w2YN0N39]a;' },!.,|6*fQJ:&aa^99Ą/Gg.m˦ꭴjɪH`#P0ԒRlP \M6y7&vąUA5dXp|,Y8>.gh:QaxFH)$ɘ= xBZcrhRc,s.D}Z;jv}`Ȱ1 hZNnwJxz;|k-ȂBҵ#y*Ux> +"4ɨB.SQWZǮ ثר4i+'`ɨB.秢 ]]*{t*ՕL*w@*ru*P{ꊨK^]"u ʒ;vX= +WG~xq>ɭ^NZ&ϥ8'_v0NFkZ;^fp.9bXͿ-3bI#7B@sDFA0Nݷ%q@6`ruYW~ogxa߽(`a{}m%2vJg" ( v*P+>ȮPzr0p;&?`X]=\pXu$jèQq+u%zu҃ҚRWDdA*PtѢWWH]Y4rףo_7n<)lIkߔO:79kR=eWIRqc*ftN3^[KFCi0_$>b39 )7 -e'߹[GO\ΉFJj{H2"i&ŨFAa'~f)ٲ}]_^У+qwޛ:F8l`k6:oRlu/B{=Z%ta s9WhODB܇0\U9z2:U}UC_աWu:Wu3U}UC/k}UdO w *!Dwl؅N:WSQʃ,@(ǡ)ܡɏΡYC}q %8t . (%G'V@&%3wFHKd(zD4,%E0 ؔG> rY\u錜 QT7_Q-mxYˡ,j8V^?ߠy@M *Įu%Tͮم7zts[闒[?D$m2 WՍ+@huץ+3yGdOW_ZW֫Nt:o*_=uԲUíwjzyx}͉=/\hEŘ70of.❦_]qthXMc! ҨWs֜ tSnMusx36+۽T_z@uP;_ xc#W~x }'N%j]nƣ߿B54Bߜ{%.4GE$*L@QfLY1,yrF$0s''nK|-zsuyF{?0FwND o;rИ 1NEsQn@\ B7O{$(>KlϳVF_jb 1y̾Cv͖j݇MÙ~ƞ^" q2wx B:J}wx E_V%]}ɮdW_/[Xi8rM*wM^.t1B|坧۫<}έ=uuj\ :axt:`A+]`%I&F ) !]֥Ȳ^gmmF(#I/Y2^J3y[P !x5!μ nE؄ɝewւX#7M`2N3Ojs,J[]+u9+ V^z[e.ȽNGC/ÝXXtHbrI֤ dNj SQDDg!jt 1H0ZEiGS+ǽgS{|R IhH}]9dj!<=?'W3m v K`zɱs(Ϥ,d+%V 0! 
($;kptq\QI|/JRi!rp79f$R1miY}{ZguA!\9#^$D>okh/ѣSZOp]늜~C)$/(oh|BGhUz|)>)ϘS^:J$(ȴҙI9gU85+`2K/%3ɝ]nxyh)CÝ#Ƚ | :YdLpBˆr6*j:3Dz ,bKRxFnI$LԹ\Y٤R> X:#gC9 fz*+MJKBH˛d Ȃ4ް(8%\1.w.Qض|,@yZKj5tڐ]J%H!)!hTfqOiqH4Et.4#ٿƖ_1ވzc@)lIɒ,~)ŢYETVx|K'/$EA'lA+d78kCDeF ]01y:yСE+}M!2\0*)=9K$1PT@:@*(lI~ (LW++QjP&A˂ TP.fBfZ0ZP;҄x"A5 (ڻNh'}֟oL;F /A}~3o!?lKs"!fhP*!^#U^c@U婎j-S9;(B.<u(8`lv W bw7IwP"k\n K${t)=pNiRz['ǎ 9{bJ(ARRZͩV:K|%eji/vub6~zJRݬm TENCpdF|5ӶD"D܋4q'vlihU0ɚHK.L^t*xKULYSd)M/Az U%)% |z(#a@M13ZB&JcF̜/ s?9޼K6/Dgh=->6|ܷo?|݁[v % b:~&zJ rzGi4|T0x3':D7%T#ū7^K*Y"0cz7M_XĻmGZτp8u良~zǻlQ7`jJ?eQ$\.Uq<(8 !K#¹ zܥ8JbQ&HP%&UR[Jd(& %ufl_c8V1Be8eWcz@yx_|y쥲|n{vȅ)h'V%cjc2NY95c[%3ȵ4KQ}6+DffF)[ֵ38V={:|&k1/C:)fE>'L>BP~&c@&g"ai-!?EGxb4bnʤ]ds2(0EGYdA ivmD%u\pr(W(MPT$p!g3svTƐ zg$4a4ŰLk -B xHQwob5SJE''T=!BiOP: `D.PPїM'k}dZH]HC6>IM(cᐳхiNTB^ f! y[zޅT늑i G#Oc_O$.YŒK$ *HdZX瓒5U&< !*mmycF^Bm(>e]ZOd+3F$V̏DMZp~X+sv(fbXZ~:ɞګ ZB"}7o=+Ý|鲶$ŷ@6t5w_?\ۉy+j$s gH\ |$H@6EhK6|.IrdXt1dPK5Vz` $M+|mSSV$VMAb;v|3z Ͳu]+s{8y2%[Sڲ{`7Ĝ4(IN9pɛ^G 0)M<+ؐ!32,:1_d_Ö%SaGfDŤ9e=fl REUxfc_{D=b%x^ A'$B !NetNV5Ӣch,27XȢ,Da6KDpL2UC*yW/Ά:fd_"v}H5*HTX9_k_QF=>$€!dBGr'Ӫwucpabkt?r1Gn] |/9 rFzף) m !_Lj/NVSSDk|5vZؙ\;s#R1hG+ <  Bqh,t6F[̓oO=q:`~c Њ C Jf٩)Ǭ]a,2uZoD|$ŤtJQ{C݌Eܬu8x59osۋEM%n댿=,{>=1 d.Z YA`8eFk7uGeQ=A%R=I`6$'*+cGHPXo|xL (E٨zZmF= fvq|b}O0^j^e|`M)ីޣP|#Le (Oiف*x}5dTaBrHhx?ώ tB؄ͻslK9ܝtAl*#&YiZ{oɎmϡ:]͖-S+mO.tCW\4v FX5"-~2xy٥.Iߚ9VkʒBHO;*dbB9dm ] FEىEhKwxN[;窋WaGdNƓRX\Nѱ7tR8Sy3թ24/]-.R2(]#u}>RG"%]#u}H]#u}>RGH]#u}>RGH]#u}>RGH]#Ս>RGN#u}>m׀yKf̏"}ֈH9_#y$h J8`fPP(I)ɏ$A*ͯ{r򔞜hG>[SpyJ'p_ιm0!(HAQT%J{2HQT!QiZ>n9w$+hc>M@gl p_C^xAÉ~ݫ__oǓ5&n !;x<%o|'.-|_wWL ~HِJE&77W\^NK .Ej~h>vKIlqK=r>yT>㶗t5{Zso}YBʃі{?%A8~A Q05u5uH\hg飯bI$Ӌ`bFy\ExYTRפ۬IO-ڌn܆L7뉷M'_{x&?{ȑ$y~ 0d3`csW'-zx=3$EQrD6%J;9͞~WU{IDB3ʙ*!HHG FIf%!Π[kn6t,7y3Z|)x`>og8 dYP ߄!(gƗH5~PL2RkEG %MfZ$psS-%^HRYBx*ZAɣA8a E!x*T*j#r!(\Xύ$!JʘT&H@xF~>(DK+kUD;%x$NKP*BqR@uQP67T$52J1d#jk%NR4𒣽'0v^kg9Sjk@ch\1 BP._Q!+D=U6'&2'bJinDFr9 E4e8D1.@)E/̨ņQcf RN˱$j)F'=Qzj|%Jf!osQHrkɦ&tN?-3J0$h4\;Ը@¬|aMNn}-"%OgumP1\8eܧ~z0$iW=lU')9lDmPW^ZkMˣ*V 
Wm495~vݛ%kHaB~\W}޶6w"{|w߾"2[WC2'fډ!Qi|ޤ7dܖ7aST,5Xw}$3F߶Ϫ]ƛWFUOmY\M5mCpa髌按5V.3.{XKL~sjm7Vxn>m7q|A avW}څw)K~8EL-Z@NY܂Kw]גt+ɎJyk_Ep{"))\ƻ ST{)ԧ ?Gs}BW&54Y:>F RkF((EjY 4Z?d:=xurtge2"^}t'Ox=հSd'1.8J9,DR44BRr+6$'rrIj A`Y6ޓdJjS-ga~99kmogF~xcPQu9ٖg8tݶE?E^,˷s>w:LrgzސZ(؂Enz~ a5ƿ/dLHc`iie(>{<dlv2Y>OL_$I/ۍٲ22"00/X,0]`e1ik,SjIn{0}#D!$2 &ɨ8ȌC({]/y…=i=z51>x|v&[ͭ!g[[ӯjL>. A5;;.e 5~Y/˘Z:bf]Ꮅ@r;:e1T(uU+݈:ﺩ݁ 6ȿUPL)ʅ ɹz Gշ.pW7l7Q]9"() s±JX\4)X\$?) Mt(rv7:Vk8H5/U(PcM#::{fy1hBKfVα >"ju6N#&bIrvF'9;G`LJu5N81'W@,-2Z*z̶ʘ2U:ڲaZG t%}#%IKqXPƁl0r^~ ĬgӷUM_AE,[Q+ FMFGR:93^P$Ccz/z_}6M SalYrTTU̕6\):aZAty2I]Ǘ?x +{M$mÝmKJF_Ec  ,|=ģgu@S,Q˴MDblJ/gʚ4pqƮA:㐇JX:xN6!!]?ThiP H.+,*E"(œfS8tkGƨGk s@a @ @# @Sݘ+%H<}/tޮsy{9DKt_R;wDtEK)AOS񛟩]gn״Ɇoפ(FZ{G1uٲ.+x+* j| W;(Nz6&gWMtƃttWU~KbܫFbdwSaݴ8N@H<r^Yv[矏I~ֹv^DvJZD =Mh6փӋծ=tCgA>t؝ /쎩VxO.977Wպ%r+i-$Yh@hmⵐ!?69]TFw26[>,bǤWcvwں2ݒ[v~1ݓ/oymv~8?=23Sxװ莉s(/'ND[+\=jӮ9{z1#=h>Z\eٯ(*i &m9rVlٗe8nfvQyuOZ CkT>⑴צe.طd!)+v <^?|s@6oWKz&Ejg*9YC0 =?a@D+lRGpxyy߿PkbZZvK҆L&|N@) OvH%EQxR0p6o827$kwUmtL+~::.62˽^e`(a_qV͘Cșr Ṷ*Xae#/7rWp-9nMzԺ~[yDlw7ur֯"ӡaCschBKfVα >"ju6N#&bIrvW##0&%mc:G:28srDΒ"clkќ)YP5Ө-+F.uٻ8rW:9 sژmc;&ik,Uݿ~ZURC[*#A6JU\VҤV/̩"Ālp_P/2Td*(}5 ZP*aB}BrN62~NPe tS8ߤ "!\l1yo 35HƉWL&~%Tk׋Y':%Ϯ+z,fcsbJS! 
!皉CWWDz^FغfQl%<_w%M-{Ckݧ!Ӵd$V.faٗ@*:,JG%ƉIͰD"[ѱ}SZ/zy*7D|f}6BSjI=qmU8p r# 11=]Sa\ﰃ}bq6VM* rDG HM^)؄"`#xbzj|9$1O`6%D4)((rB(2<90\T;(׬lЪn(* 7.^"ᔻ,wޛ#PS>8$!SO4 UV׉,L+ fs`;4=%Vnd_<1~IBGYi!ajV2F9 *KSLPbJ)}y`1yiTg]GA[MGVy^[&/S͋*r"L=FuxI4Nbp_+R}_7'Ub ɩ貨o59p(*v3x!GGH1QazMC`BLFJ߰`!Ic(ttg:;_cwﶅN;>5U5끲RҜqrlY݀c s6\Y'1&,G7wu\w^iyƏ?ռ"GM!LDT%Nɑ9_̷ GslÑoyR2tEЩ)6- |hq>$bP@$4ʓ^~+WwgOo6Q>HK W3r"[SHIIoef{'Ǚ;Ux8;Zw8]_xP֖6)N:k)K=h&:h-+٩ - ͷ3Ō!͹Ç{ nԘe ٤^]UMֳl9爊:[!E;BS߼=:VGR3!)+QL+|N1GX?^OI{`,OoMQ7y觙KOsGkqR]53Y+"՚P!"6 B>Ts(gI伓ӒMXUg\H*56,> YnlH 16E*<]^]'ɧ.n6W7m\SO\úͽcU.7d0m--5FcLGւ0 d`rHSR*)jer0%I1p p*d+R%J -Ud+ 22.0l8=Iȫ-]woֿfC=n+ECW%;OЯ߭LM\LlZXN.Ǔl'SKtN%%W3 SP"'$(IJq@A0 )rJ((^drZqP9fP:\8ʮ$Jd0S ۠ϻ819ttp;!4&'73MyVՖk1Li,, aQf]Qԩe+zB JVGY)x=3?N$Aas@WQJKU/>Tavܫ88ί>RoImW&Wgd/ok/],oM{Kݗ']ۉ5Zp1P&фT C45Zid_u>' 4Y|+E: HihBO=u/-$Hku\4@4Ql Rqp#82 Yspp1T43(ލΌl\->̛]gOtdNH")"꣡ {ΗmP B.ߎ l{¨7fSRagTt7v Y +ayD7j8;]{dJXSAm;`wY|VMZ%[q A\u{,-s{91'qT!+34*:^TёeQ͙uQATuʨTT' 9hN9WbxGHq`|gKDpsDהVJB B}aJN1̧_jR:gF,NT{m/g󈷥!3jz[YGX?,@)2:AE*ÒJe T l%o1FEƌg` flqk6$Zb();3cKRRPFhlhΏe6\&004kؖΠ9*$M zQ,3Y A"Us L3{cyX14ڱ|qC};kx>(zzWyf|Hrm>a=-};f15M>=zl1__\}=cnBlC4*A'NyQBO?ZI-^fKuu]Og#d`J>6~md޽{f=}>lrwzk;m6Gn:Bެ_?OIoϚ/VKe6?߿)V*k8eO V#A@ٛNd lf-q>lˁ3I$E^8f@$֭>uOڱ{:{q^g{<}{?ç7_KLßv/s/g֟ڍ0~VzUηs̋2'4,vɐ<R;\ gO2U0T7`ni@Bw ]]EgWr[lg%!A ذF}{0÷ Dcas@mޝA;g+=flRqqf>{2jqvyqziglMrs)}+'zoU{Wd1J'#\I;{WH?Ubo : Vzp^n P/p eՓ>d%6HtfS4ZP5hK#wV̧EE BlhWz~v<]NG;5.?ZbaToGnq倡!.<)} h{f]Grv ܳH]+:b^HV^w22f5߬?7Wgv3S0tqa}rlICIZ+z.蚬*KqZ&v8]H[! 
-׫_߮1)g4î=kOV_%Za?\m~ݘkS="DhZ+zH1Y𓭤~mFڨ`;]?)5f/B"Qt>\UZQϣV%U]a*=Hw|ۤWZkɴ MiZ6@.7o۱XRw+Tm &e76iRLj9N׻Pd0vD4cv[\ot.Ǽbh!SW{pCDKF}W[fܲcVB\ǽf{9QsGM":JJ1T@[Bȥa1N{'C;lgw)ZAH&uQDsG~Nj'Si޲>"Uwڋ`3<%cNօM,ix_O͹x{5AҬ*ҋqJO{k%pN4dhc9\2'%d ÄoaJ<,QZdjڤТ-SLiZ#'Z)|0 4fmtT9h4V\jH)FxKdֶڔ^* QT3j,tROI|է |h5ST:8S׾"fnJ}Q$\ P8`RJ!B QȎўD }m.5ˎ0AGxPnj ָhSk|Fn xJ v ,*60:-;y9x9GDjR1L$[`yU벫J\,dp/.Zn@kBhcnu+yXLāNSಬT uWh%Wk e :om0SA$ ED&X@E"3|5gxbGܤ n!6K1n(ӌ/%RC%;&|B(ƍ):^1mL:v_@bUPP A\`#)#Ǎͤ5#%h~GBY'o& Jۡr{Fj}Oʺ@J @/[ܸ4f^n!GAWPHdAQh8(Z,  ^ B9TКeM'236/^z1#.EV̺HNa13t0!bEfCΰKgyy}qvNCtԂkW f=ft$M[oP\ፍ`Ӧ8Ҫd :A uWIJ2RTxmBV5 e8niq{'e@YLZ[]H#38pGuƢ2YЩ֏DC}^jJi(@6T H7VEhma~. O&!z*4ʘ} ^{@#BK|v)}ю߃b:EjC$*KtPK|̡cLFuu ((LEQ{Yf}Jh v% Aܱ",iK^Bk3i NmB[v#hZp~tAa8JZ!k?n,BL  35)JL `?A j@PqDma%1 ҰדKM"β] ʉ6łk5JZ!Գpg#NGhTf%ݢZMުhQExGorV 4/^q5,a* 1dkutzv+ϗv%.gyu%۴gq\GIj pqu*ip$tiG~G$vGNd4Q]h5(U/ yHY=yp4x&f cN`ϟo9)͈=HO*LF/Ȁ-ʡmM1WdsC<ܠꮋs"I'%\IzULtP%(H Ш3)G!8PqԲnڬGŰ%4v*'[W6b]M57Z`'] +U &eڡPʢ#Hmr3u#&z@U4h =JW6"R 1tFΩm X{uŒ35ځ&z`IQ{L(X*-lF]Ρ~\뜼y~5kW!*Z_=*ƛ LW W9ڢ  '` aMg@ 2)ip#f Q%#p[WmM\4*w3,H&b!C1 /VR!T$8tS'85uvW4"Z<!G /gqryO|y-Mvm 5;GmwU~uXG 7GTMtEѭl6㳞TfMY;*^>)`S57J M{^ +g%cTYWMY J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@zJ mr?J 0{? VZJǨ ND@V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+J I (`Q{6>x%mHV=B%'b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V=^% ZJGJ 2f-Q>+`% lV=F%fb%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V=%ЇGAun;B.crvyjƵ#txOO^{ܗUA?Yj!/go >dvZM˖Ξ醍~ XE%]557ZyK ~$:>=u?R{1\cr +fCo E>`i? Ǚ? 
Y火RW#2\ul{W08h7pE暸/pEzኬI WV/i5UtzFa7իg?Cxlg{|{/?ÙaoQ$5o:"͓ aHP 7۞e>Oq{ܬ;{f}o6XeJ/~Qe*Mp ?}/噾Ű AWibh uS[L{:Ugc6I$uqFN+17 +]}X~8\`>|t=ypW vi{dgeH9D'[Y*R1i֗Z+5.{mE#{;Vu{.t}o85{~^|k*?OGt_gRБ!7l6?ß86t2/7Kw=ci9:Pݭds9H//cY>HxanCs̭}yoծM;O|ڞXmDOgL#<:RS Z_~vn?^/~kWµ]/ u~emԩyp]/^oO/nd;\b0s;mw/Y-u WH\"֧cʚȑ_aegv$#aoD?쌣'ffDZ mT()I%aKfD~@BA`A绌-׫lph_{YV]h| feAZHx?IԖbMVY潳_i`FEiKf2L].bҹn1lЍ1?G Zܤ//8u#?h YΗGM9twrg|4{ͳA1WoV9 Eh\gB>ҟUp9LP,ӳ5vw"tX=2tޑRmcXÛ-ZVF[T׷F k2o~G[dTso׬f۔&fn/Um2#&:c <[Y*,F,m l0tkףѣ+Kv:U$ umK}D+FldxKH;bIabm@Ff*._VcD]4 ;Zal vnXVbe] 1;ݰ/_.\8Ə8qƵ=B;5[ܵW{}E0m݉͢?mUb't`{݉k`#JJEY'+`nLqBAec9iI(saϹso]s߄v/D%;Mv@:(} )5)&2c\i9g]$5k (PAFkn5miߞzv pb:D+'1 {0T*G%HǤ(-D(AzVxE=Αzo߀*]~ ]:FaSJ|d%`e1K7 NzAJQ["Iz7]x/ v?۶;gh/B gKC'u9<:MXb|jUs|'*ûG1ϳw'@ EF2Kp>pI cY3wWY__=l-T#ԣlj䠹 BJzYJ~!9XZ[(\2QO=n&NvE;Iȩ!*k7Agea~t薦YrIw8=iq HM^)؄Ԉi"ipɸM" g85dz֪_[M]i/{ĥ//$%%~X !;g J݀qRPd96~)_+QX]Fhoex0w~2S%' IU򘐘( &aH1iQ+EVZ*Jy-c&w߻Ɵ 8i *+}ɳjL;`wj/ҝ1SXeĿ'wm f?_Uٟ/W+lvOjiRJ͎2"y Z4 {~mnR_^%rqiJbMiD^E9/y}]57v(w9L"< e)`*+W>10dE!W!-Ie a>`h@"oK6Yk'JQ81ڈ;5nx&w Dž)J Yȴ\#N@6ZHC*O:j`'{>URJJM֧]N+kz M3yLIkYc| Lr$u/#[//4H.F]5G5$Bӊ9sQ ]&+ҋ+R`%j2;-sebR:{Ù8ò`\J{o4"+g{_w*6L|#als{.'W:餬~٢WFWuST';mKk&` ]j~%b_h/$㕉 htzϽbY p١IX@yUD\t>!JR= Y "ܑىY`/AdjB7U}>{*XϚRk v;̱/Htm:J.r7T JR؅]J)lV0R؅KٗKaK5_ f1ڻF>_Fq h9jDɢ 05 S=Z~&{3\hwB M M,{H)H2>[9a\R1z4Z(l],@Z!emD$pZx[E#g*jyZ8 >6CXYӱ2fa=SiTo.M=Xܳ\^oj#IϴHCxoRBF&g2`5*;G XŒ"2M6x\;O-tzC!(liAPQK0GV;M;5x yM'ys{Wٵgkh7~#6¬%i,QCY Y3\#N3$*Xq:Ć"H (P~i:(bfR2+) <RLCi RBeT,i(>># }$.2Ct&sY62)SG;Z#^%O{[7txZ)DŽfGZV.JQZ\%#PHIK1&S7&`}M]L*[I+^~q !KpR?]#[}M-MR^OvU5991VV=yCk<~mSav$qiv$ig)7R Vؖ/\cVfCk'm6[]@猥` Zh@5 U-&#ĬX̒ 2=)΍ 7ڜ_k9W3ږ;-c=_-&$m[3Q{qN;e8:S퓿hD_b̜kChE*:̉/)H=Gx̢BPtn̯bJi`)$>d_,%p,b19ldk3eZkIGWvR#?dsViL̑_fNrt"IU`:`Br  /!΢.IQdO ȣ"PMp5vOnt:,=îXjqE--boK k ^ @ i)8/hds")GJXJİYa@r \L-$472 I 0(B]=;-○D:]5K^g5)9.Be]}KM]vJ05_)Aaq͌>U2v9bg~jIǩPU4{ &6hGïylȣ v-AaAp}D?яq&Hm}G~ 1/$k1xaibق6 L=.לY&-312_ ǭԆҟw+Z]9ZA |fJ;/}ѯbr(\ -H轱y4(Deb]d&$-Q}.kr7< d:4=,s2L HpFerm\F.y!2'(1:eȘ8,{SwQOgM=f8_ry$fg&G+ 
2IKz7Nhbv%tʪjZ~QVY7]:{Qq(!FvpA#/0E7=$A$~No܌{4OFiexRS٥J3ÔfAxDDr؛ǛSBV hbi=T"8g=$eb,135^6Bѡixr}Δi!if`p8ZĦp>m1gmEUmƨL>Dv yO wNHf"-d{d.R? K w$vw4$,X> {,gA=l{Pޏg~`4~jm;e.Ci<͇qvlѻٻF$W 6KX=1X`gvm4T˭&e> *X,VKbȗa^~T!)Q$DMVZirz lN6 5yp=f0w|ji;* S[L b^jI%K/ſ=1}oQbYGgv HHѶG&<&ҬX IC߻A.ܵ^Ӵ{o'2遜0CK}U > Wz,T=C@!7'ײeX@ptNTԿQM?fiΤ,*p$@ ɓ}iϚzO ON>gx|s||y "0{dDt w{x '*Ax;w~:6x{V57Rsj=PVl=~<1UO+#6~lч#ztohoz@M9{~YvkxC`(Xᝰa,vRZ2,֣ü}<SCZG@ Оb` Yl#BzI](ga~{=F Up%.J)Dcb%HOlIY.ЀJJ-ѢԩWZL1œY..>3饍S<)Hom`eVmziw^`kL.b֫Ta#A29HysQ 9v>iRsӀ{T>-y< Ţ:8G߅I1vW|ԥӢmeC_ϓ+a*y,ƀ~40}6К;P~uuӳrPQq[\=VvԴiq!ۛuoPΞZ͗mVQR!J --% ll-76/ک-?7ć "."j2 E $Xh#ssH!?q'$իm3Wo] r\/qwi/,pQpEVJlFr!4\䗻@H,`hIǩ~0FKn4yt&>B lh[wn(EL Ȫ#|)Rgw{GeϜɫƞ .+ SϞͭM~3ou'/U륄TF/u^4ĪowEX}A;PE3j=cQ/i7Uzҩj].>N . 团^mc Ḵ9{ :|<%8ג-3XF~r39fˍݭ yfU(`V($Z$3GCkMpΒCQ{0rHRIp/sCP q;-RLTGOp[lY} eaE4:νAMoo\XbkO}p7^g;{S¢1ʮ= RJ' N%15ufcjxHXG]plB OQc*5(K)µЂM0{+Y~3e83%,*4R9:.h(IPN)F9rVZ)l; ʖBim.Vb,`. å* XEdP.!"JFcD4VhX(W`7αA1nI4MfV"pE`>#(h?αͱRШb 7hEieV> ݁qqPSGև# 7䑄@J%)b+XcqV'shnc{rh=Tc T{ 3TJD`H$&hR:0(P[{HxYD!e!x0k5f,`xZFL&Z i#5fn@ A|=\%ۤ4|yuF:O̬%>_+S 7l+iYfW ]ym":{5;I{>j3]wuگ_GZWJ|CА1ze>r6&~컺Ǿ@/YJG@*DOZN'a lSv50EI%3ȖX?pȲq` нܲܚZ0Cu:*ˌti^cCLXɣ Ü4AF&`JG EέRp:/A[R.k%OƾdY {B_A&= i3ݸ/Ѿ\v4Q95(?{S}`9MBIk/uvRqXt¿=hC<e,!]Znu '1beP6U{,XI9L, r B($<hs:FYDZ+I]L{,q_iy5R`yjL>鶮k~TcTHʦ"@U.-jJNJGX@QFt|*lz -xtP1ҕX ]`x6tp5BUB*|#+%w:lOԈrYm;M.΋:oe-#0 9=A_?{ULzp\bR w9]h ͖`XZX:NpE6JeBtB+,}<,N@oW-j؋x;:3I[qz;Q߲QNV5viDd-X}vp'ۭ1Pc:_ao5L;9Do;ahU'{^WiNLدtVoUj߿^bN OsܐS#KNڒ-Jm|,ӁW#]FVqdmbQ^'huFY)X1r[) ]_"zq6+aj36B0] JtcΈ*Jp΅}tP*6!;^a'թ&467$a$y8=HѭeI޶ ڋ/WV* ))(2!JP10U9ç(pNdQT}q3H -C愲<xjeNt Uy.tUBHWGHWLseFthW -CɑNU~3+Ly>U;UBK(HWHWaNrX糯p%UBtut4'&eCW .wGm$r=J ub6t`DHsrN)R]zbqvk 8e?vi4'h  hP-Ѹgyx;n^T--1՞YR-לaD8`e3웡UZ6C9|$5iۡǘJ2Xl*4UB94z+9iWX|W\ jNW %#]!]Q8(7|pfp̅Z͆NW">1㻺8uM?|NCXOΧ7U-ʪP*a+U*UurR|yQKeIqU>xMhUu^G; JRwT^b]* 4yqg#`շS[pQCNNI3..gԤIyHn!NPmR%E'rTIKOz:_ϓi4JI\u4=qBbIB[|xFO1”t߰hw/x;=-љg2v=j}ʹff 8DӀ`)7CIHGHIY^KOyjTFSj%BwuLU/03^E7Bu %W9vyl%.^tm]_N*cכyuWTB0Kmw-mH86H EllO@-)RMR7Q|HQ!+"**K @.f 
]`MvZZϷsY]M:f,.6|f6 6Bj{mdz6ݾ]? q|O&θ;'MzSb^6+A||0r[oyӄ=I/a̛ibZfپS+t~:cZwzjwVֲ{bʽO{KnH1mvgr=sq|қ U?-[ۼxsö[طݹ']3p,&v ~\s tBhze-SnI[8(pJQwV]9uuɬB88l(L_ $8<>5(d{^vaע=D&g#YFB0J^CJ^+ׁ]Eʆ"G "'L I4'"҆ $%2Y]Cf<".)!C2\\^AV{5bL.3?~>6poꝴN*NމٲyfU-v˻S:З'Y(>Ybm3W«~6S\8:\W:p;:a5k"%G&oR|StQ%Q%qJ>f^S. ᜖[ lFaDYu I[L4}|ϔc%t:)fV[r+e{^5XzpҎG7g+FqSѯ` !A\;ÂI9̒jU)VN-֕aaWP$s~<ٙt II٘P\@E/ 2RrY%XƴbQ-YU<'sS9 V Ɍ%:J] /erAX0gƬE𺱜5#r߁:1F/!KO2JQSiZ+PPHA# ,,l<2dyDܜWJQ?AO2$dI(>)ty ,8'Eu/6RpYGR VjbUS4& r WBNXJ u<304$J`97(ϙ)Nx T+tqz^ k 8쮈t}nUdO8D=D8Y$ &lP mӮo'+7MR+O~vR mڨ,q gg(^Ay}Fk$u䃖1 r{x98)8fQ[ UBhu:QŅi4=7اe {= ,Nv;(Jb1!z'Dcx4QoT8W3N¹旞?kt]~8?,G\O\3W=1@g;;`pEBgP!אuRPK1 yQV+YYi%V>@AmwLJ\IR{@奧80ZYT("JLI@äfIgZ9:f7ŵ3VO :yXWՇ?LUO$3)Yae,XZgKtŤWygBK"k Ax }'ɇW'.r`~kP/xWo>1Muy\}X|27 -3*: :p!.SQ - i4'8@.2S+ FkKbD'B A+M16F*MHڇLk($%ҳIEO9.1b3rcD= $$h.,h!9| b烘(gCː.Ĭ3>N!_N']DPj?B%]Q2tN.`&џ ~~@=q ՙ fC.' J(rUD(=x& @*d Td-hLhr(נl[@:K$ag3r^4!uWZ>r]F7+"6oQxzfpXٚ|It;%yE*Sj)Q(2"IOuj Mإ)c6{)f kXOK2L 1Y IV3^DT$9,;qwDΦEG=>=)zb,_ɳdbT{z̷ϖ_]C]ery7bh2CJ@6N8 ưKiӡ:Gp(H `e.,1/# y؟yٝ4RBFdcQ29'(kRGJ!F/TEyW= ө7#+6#*|L{||[lIWmK$db*[XtIz. V. 
'b)eu\[1p^Ze (>eSPo#/A1" SniXI9OHa۬?aiav{R,ɑrtM9GOhM~]Z?^qwEkojI)xMul:_JF5+В^cTX0HI$9LJ՘\f{l T3Zt@ -̎oR 0]u7@@(Zkf֌J3]،3Յj9gՅj^%jdw/ ta<~^ؑ1B22d!$F< I$ݶAt .pÔ1[զ.Y WB=> ,ZsXc鄶\̓Iފ;NEkAk1,<2"y z6hA؛@6Ŷϊ.F{Ȍ %_d]Ô%[eYGkQS>e5fևSEUhfqFԍ54]`A0),";qj%e${ GT @FםM5Gv-}NjrJ3bB6[tƔؓV4I:P3r"PG֋n%\4ٌKՋX/ H5is$_QeNCB!p>AusœYiCX3͑,{)/eo#10>bx8n~|Gnnz32cA".k/:"tA!wLС X' A?hУ<$L'|-.\Oz7XG(WKq}>ȳ B u"Bm"Pq$I&v >Ǯ(RXk̏Wu^}FiɍyY[GJ㗛G*) `t` ):vE@U+ۺA#0ip$Xr$1!R F,ǐvH".%{issa74kQh )ح /Zt.mn}zVt"ըE 7?=VUKz>%wu]L *5QN\?MF˃LgA9u;xwpt;pwH?Miځg]4]5'jQ7qmVwՋAz]d -5FTa>MPb:hDIU|rsnMwd%$+f0eo4qFgFŬM86Ǔ͖)݇nraz%=8>#PvR\5 8wDH eB+w%ޣ'ç4Օ+Zٚ<|3i{>Ds7^&MU 3N5j/1io{Sz#$hc|&q|mg)R(R MV7f9K:W^dV(oѪ;{Yje2,遜!3cmErFz^=C@!77c#KYRqvp5r}9PVD'>H&9_/o3w] YZ 2ҥk[7¼>UqvSyWo;3#13?BSzbjpڡx2kB6HGT]\(頪vrmY&oUcbY)&&vBGˆ-v^U,;&mlDn&.Ք&cotG8$\ (SFE0 u ` a6[9u}}-y9u!zAXE/I*" )&X ƨU0^+$,n45{vsU#\,&C)Ex`lqRpI-w3ߕ\f[nuR zE+ઔ+yu0"t5|d 7?%i8 .ؿ?0 "a(K2Kü( DYxva.yV,kp1?^QD3*c9\"E ) ?`ALZ]P_\]WYu_|UhdeW"Z<Ct<+ ^}㛬ܩYKgYˤ6reҘ}M$߀Yş vBPj4 RjsfuJGY#ldPmfҤv -OtAhQgZI&1T)b֫ a#A2?9HysQ 9X>0l=꺫 OpCBvܟ:^}%}>PXwYm풺/xi\c.&OAm\hsCOY׶yUIݿFi˶\׊mY[-B[`0Ԭ~J~;2K}D.o[ r@YNDփu(ޙ]Hע cGFl YeZ4@FCӁlh`b`1CŒ[Ū7\a8s4!# (R(f0 jph{uf)1t}S#=02zĿr#Y׭ڹP׎0RZsWYOv]+ 9ZYK0X`"k"QK`ၥ1j 3ꄷې㧙q :1ĨmRhD& ւMD;?/k>KCMfP9Ak,9u#v֛4Tqt!5}M텥<{9z"QZk ̭QIIf`)֚rϡh7CQ{0rHRIp/e;Q!(8vН)&8 ny58۵{c7pV<9k&a6Pck,o=[u;eC&RQTviR:IpJ'gYqD>wj9Q (WKM)0jZFe)A@Zqf|s58 -*0S3S1Ȣ2H#!Jv+-\Q83HRr`)46S@لH(|2vzn4JĝJLZzYQ9xSQ FP ~ccbOCeni[i?pfаx@*h(o摵݇HBrc ŒL+Xq64i74I64#m}_{96K-N.0CZ@TDb8^*ϑ@ / A(:o;cƌ o^ˈiDk45[!--VYu(ھZ#_n'uGUhPSh˷_G-cVA[TH[آtN]-[VUOnEU@'tk 7B+s4%tIv7jv 0ோI;w5-m#7ڷnG5n#(敒a:lot{{m2?fF}[*\{ եh.( z4"q.47ߎ-D7vYD9G$MFaqFoQ^FnOt:oLxS<`s-: A &W`>4Q;M˼ $QEcxˤL] Tu۪v~PN%ZU'?Yr.`9ϧ؂Y ] 9Zr;Af}^)KamY2ĚZ9K>i9jspɚtPNRI4Q)M$ Hh9zR!& aq4 pvYnF@aјh#9-r͙A6ƂtD ,bpek}4Mth:reF:SLS!&DaNB!MZ@cB)7j<[֑t HrbrX+/B"hY+pCmƆWA;}o~5oX EpNc5SXXjDxXj |R 9ZjTqDDW OfKu98ZNW %]!]1=XUz]%V]+DHW\#U %{CWW OhI絫ΐfg*ѮZANW s+%Gtҽf*Ut(%]!])X9\\+Q΋컕llyIXW/dt2/z wpT\1~If`�w5BqMkϞŘZFN M!MWeT 
{63co'9Ό=8Kҋ)^I2jGnۘP{X>{MAJ=Sc"Ҽa5L]zgE3" -|]8n̋lvs}{ U^΋)P@U+{嚅WR4Z yp)ˑV@GSJmΌ6>)7!# tVFv0MQnm(sl!ɭkrͤ}Vf{'| V Gͭ`3bzDWX\y*t"gPxtutwqʽzDW TWXwʮm8t,tE8 0A7t>kу*etP 5e(#U+Q_*՝׮%|s+)JT);7q2NBA@|2,Sor2meV-;S?/o~_.rtIW8/D}TmxqS\oOmTi Q6G#!B(tw<{ճbW<{ԐO~㢚 gҤUKx> -ũʠ?J9*%QIZ^^f?NҼ^+L?^ϓ'*d\.z@hn_EˈR;ٷ9Va"d[kw5u\٤ʔWTe***P{N_4!y8K$/ݧ26Ѿ{m7iǷٷ p8kbHipcEl^QRsμsl< 7Y hɸӕ$ʻL 4tp]m[+fxt&LDWۄ6j S>Mk_Fl{-a|޽^?*rC}*j ɯ(薅f,UA?ǷO`0Fo^|uoo'~s4uu}}dO#jL÷$=[));"{)8c.{ϏD%i.2 {C\ȥ4i0.lVF0ϿٸΚ/wBkk0 aƜM-Kw0ņbV,ZD+)ZCQhCwD(EKQ*CO8Ќ°<ͣ9C!]9%c^{=$R,\m3֚ l()׃$׭nH 9b{n 8E4CcW}Bp4F+PtbJ5=QKƇh`$t|Hmɶ4Bl@=hÍA' 1P&[Pa 2)M~B11*ǹèi,A7{W LC#FcSHQ"_^7=BIO.sdCRumY(/`TIw |B鼹jVUL/C<\s9 3f0fpfxɖLW@xבlkaM nN'k;ΒcO昑((D|QM^B:ǐܚ`/ODH5/u:,D@K"XWk(mE a|7!'ȃCKdn ,(2>EjZcAnaGhե;P%eohBJ6:IkPP[ q=AEEg; oKs_VvV %*YJo*/7J}9Hmatya[@8Fn ײx*.t iWZ YZQe0\A`T}"XҺ1Q [}!rAESEX mzZ $L.);O-`pG" +w0 ȄR!}M(pm RlZ, |"J}76J]-P oDx,8;eE#du=Ԡ ;܁6AmTzdܡaS`buchȄv%H /*j |uPo-t,ruU%`J;0BK.}#v3yS>i.3ďx;U0طGǣ|`hmFL`-d3x:p2 .A[БMg%s$%@ۈAhLUcXuL11  .؃JH * !B;F+]Z+c.|OyLPBB5njǣVH܁m CK *YȩՏOßH=!Vwt@vV*ie ݏ?߮]EMv!jUPA+tm2=zuiC.-rXAЌwKB*q`$#hn胼zD42"<5q[SV ljdž@A-DC7HX] @.m1U5(#VF/ֆQH $2Бx}αsXjCUIbe(- A5qwpD5x>΃2,b l"fThE|MZ!]gѝ]5M6Q,άfv{ EZazkJ<BZ&A[vHf8Y` p_8*! r7 ]el8Oy-?\t ,>_n~|{qWgM x@ሳ`3FφkOaj0P𷳩šbjѭk1Y5Üg5rFCo o1f P..3zhʌ42f!)a!/Q{]˩4Ts ]a727P;&%\!ݹRAH`m! 
PMUځ|b# oݪf) +"⭴nsJmXO wt(NP j ᅣw(Ux,R3nT],k~xJS6}V2=rǀ:As[,+5VzP[FR{&=,1ja BVx撃]YV\Zu*6(&)Ԭ7\s!-,pF:i` EH&XkS- 7X bd=pe&/dO 4%尀LJKF$$7XO(8qK]RZG7 o:h Näe1[.Uea4&C,Z*c14PzQEp$y`xk5#[ U\*3B׋]aD이$1@AAP] ~\\]}Y xXPp{<9x9#˯K~9}׿{wVџKЭ(ww^|έ[Iۦm^7b= aq#K>d}ocKnݣ .6Y*>rR,N1^9?Y%iG?׌?rN?,t/(bzilZ6ѻ.t.~i>e[M+^/~l^KZ٨|ݥtO(w?jS7[u[1_ƏGXǶIP d/._9^@U8˜ g/W0>$J 4tH=Rׄ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H "9 $3 uʄּ; Fw&W1{IkT=;YA)"&[N$@oFuD$+a@oqG«!H ""H ""H ""H ""H ""H ""H ""H ""H ""H ":B=$vb8$~`0$~^g'%8"B""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ":^HODi BqpBi= T8@HY @D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D t?ǜ'SojZkݦUB Xs/Ozx!KYi.J \Bi=R::(;M:8n@=sUXJwsU׎\ g4+؈xWU\#b\K2WGhc=$s;1WU\͆bnq#4W5uI+r5sUŕz(JeUҦ;h4{ \UpA$Ҿ/rsU]9J \B*rC1WUZn;#1WCJ l0 UZD*%h2Wh HUܶ ҿlūɐTZO;iqmJܲ&M Çs_wۯ?p2.&|clwT;sZC>ގk`׋Ѹ(kOR~iujZs'$bYcq2?~ئ^^럭3/b6&y4ٓ j`WZ4&T'^>a7/oTTt޻YncU(k>iX]w%G}ʼrfkIdytu/תJ0]ԚF;?~`e 5n8J9`zd/jNa,5KsOiJWX# xSiN(̋Zl'-{ݴ*NNMZkPq]SD9@ nĠ澫f8KuU\;:VgT)9EgOt'^O\y{I!T=,{+MЦ^=XG``R\UqZwsRrN͕4BЃ1WU\ C1WUZ{ }b\i2W(xWUZn1+'O+ hPLU殎\)^\g u \UiJiȻ:Fsqn@Jױu8`W fJ jG) cd\zݐP`Ɇ Vq*wsUԎ+QCP`ˇ]UqBګ4W`J.*ħ9cIYrB|ϯwqv cܨz"j=j\PܫٟXWk Q<EMVx>:!;eovmalU2X=}7c@2F, SS25gٙrlDž]+cBssՙ+UZK~>=YOqb]?g\)(ף=2#D*Mt42kU[Y [a<-y[ҿXq3tgruHL{W178Mb:vM;ΗO'X[}7oC~ޟǫ,ϐyBu8\x!}{d۞.}ǯ뼹H>z7vS:㦣-Җwێ۔gfz~v&F6\ӓw2Rº$Mtg4Q$f|#gkh6ſل[avj~Qe~wi YXWvyLU/3>/5 [:N߾w}X/I7eݞuY?rWgN}nUV=PRu[4~zpqtggP춯Ǹ~>½#kDž Z믓׳&cz.jXip7p'N\SU]m0wVe_]`_ZmwX^޴s~QbsY'QsAhW= 1hzcWA9؀e&J)/ɫRRQ9eNVwW5V':f[_O05N(h$XΠaZT8 _3=6[)L̓dy|#;O=2rP,0"&X<&^Aʠ$W)k\=VEXSl15瘴S%h.dcN#28MP04ouw&p=~<,32*(@.ǔ&fOG;%|_M;C֗_6~zk>v<{:(n }UԷ4렼EgਗjTČq3j]{jѦ9[D2B%ing:`ҳ3n+!Z="8n!E9YTuL PI,\xOT<+^!vcG܂ B& 'lFgREY`؜:dɼYg?TͮhGnv}z^UdHaVe[o5K\ζz];wd^; ۹wtrctt :pLD"n  홭Їm_/_/.ƼƳnLJ(ZMΉ<'Y*uxtAi U&: M8yo댭ޕYQغ:: BE-_H%yZ gfW CJ9PJݻG{JӋ*Qj&%k +K@ /TrIZd IŅ*I]("uꎢ/snw#e6X^q\}d. 
Kəe-(NUYyf@MݧSѭo/ʭK̉U&º.JQʌ"TBġs^&mܘ'X-_*VJ 2C@%1Bl|:֦5îyi.[g;'[ɎК,f_۴GzѴiǣ'bxq:A6Å=pmh{fQL`Չk'mX]H,2$xQ2hi6q#ݫLƛRWw{Jr/rhRDZlop(Q/CR(Q|H̀3 yF7@5 UQ[?1IZ͜xySy cZKE0​WY,oF6t)|]69-}"U2 x\C;G _dFz7x?>r S-ӿ+nU\ WNq:bl7ڷE~x ]"SSҥJϏ2XI8Li5DOڠOzf)^2$ʤfaTb%aHd#qȹ }OgȮbUnq*zM|2E_ʲS<?ݲ'8Qh ,eyM` d*_QY€Aҗn< (K~CO) h$X؁FK\&KCm&㹅@:"I!G+whZyA.p"SQTjI/`8&G|ҏsbdBb2-M |cr^3#Q 6, cA4A ms!X+ TlqBπ<_8'΄/h@3*q9Ȝ P"c?J` [8^?3rQ#JI}ZF= Fj*$.8|L qE(ƠSVè62U vޤ=cvWQ82ͨ4l7 R /+ttOYZ|0P#: "0J g)͂&D x`0'mtÓI п,h,N'\яŬC}Nh<~:ݬxOi_eYI}Z4ЉX|mʸ.1}<ˬqFр}.FIߎ\o4hGYXZ/7"ĂX@u~l|73um/Ƌw]]ҭimZ9uvudICcUO =߰}$]).S=r5an:Â0sp'6,dfs3S|3o9\MW`c~bPW&u'RWz+{o9u0* e /].y?oxJw/lSw eM#jZ 2t-9<e+_SbMa%HE:b:y4F"pHGcgɆ$kݰRe|~d@yU qOWLr),4!&&R%hi32,< WJ}3Oc Qo`uycGo=n zᙂ682U? GeT)eh;ݸL>c0HɄs?m8JK+=<WzxV:zo3ur6^NzWaVV}uCٝJAxV]ѷF֖߭\3Ѷ 0-5ԃ1W/V $vc $/ o4g_)cȖEʟ:^`dSF9䎖h$Jr$ǭ!.m12pXFF8+ǔj % x584Q3"h"$euJh'H$5L\۱a.6m"OMMۭ!o᷃y&}#X [Kd 315ROc֒l XF&2,*@Q4qGK=YG)粥Ąh4ȕUsX:G";%=r١,I)&\7R\H|'dY`q.PQw;1 #6L99"}LlgySBX?9NP^RͻcD1VN<5_\oiK?mSMI(i6 ysvt[8 r-J9Y0ً/FClDQ%\xE1sP)ɝ uGFgIKGG@TeLZx ̂V%),9+jlq|1Y>;nk%j-(W'E{_n]=zwޕ t C$1&K@nԤOQ1;WuߙQ.O P2X!xm 4&!B8шL걖&W2*ȹIA+L:.YaM>GrS[qYj;F3<;1D֑nR1wQgFV6:)D N`aXGc^c[VYܰ4,>r&!^Pb0pKI8[+׽LY.U߽XfD%F?[d;FoT ܈, "Y1f#tGÏ9f=xd>ਔ/$lJ^dCHFQdFTv\C/KIXf.% eCp qz33ˌNmЕp^b1PW|>=VVܟ1rcj=7-h~[5wM[»?zzhEILeFӖAu͹/m{X3;̠b.tsUXRYTs8y'ֶ]5ˮVrsM=gwĶw[úf˟nypN֑E]uwv*Z߻ |2x>waeeq] Kϴ茒P]ߊF}+zs>pho21:`WIB#"ιAF'z9Fޥ,8"WHe c2,KYoϕI&[D^&R"ĜyU.@PFT% tL 2qLGfUVQςl)xoS5!8:19b7Yp:<'h**R1=0Qi%pR}gY+Wd]P;U7LRV=<\f2hѮs0Pʫoո/gRe+,[̺S Hr'PJU* |.*+9FDaJ6vySUub5;q*' ANh-1;)]|0 !5~$y{.Fݶ$PNgBQJ Y,f\Aؐ6VD}wM8V.5U GPyg}6rm*%&4s)J+ewe͍F(pMH\wcwE/MuIQXTf2%@x"t\AHf"ZI笑GS9 8W.So?h28׶bMz<``cX!שc)Vrz Bpc wXL.,hhg6 ؒnTbE>o`q̋&IWoق6 D3} 2CilTa>:_˿c'L{Х'R"汉]g!@FgYf祢}=&K9#]>6*qsF<iK2%{ɐ\d̯O։?A7d?:p6}M%ek[ڗzPaу>+D߫{|.?ʤz3LI)&G/UNaVQ !$,ԖY#5Qݔtw;^\<^(.ǭ<@uAɷx6yH b5A s%R XiJi0$ o>,KeF|<ۢbihl?EgV'O^{͌HdbHX bA6A ms!X+ ⩤x.g8^3jyQg/h@3*i2rs 9@)ELJ`5OjBzFHq|88z$fg&G+ 2IC.nv J1(դ~;#7-zdv[PMD=(f0'TC=ntY޸iHŨƟ׮btNL02QZATpf,hJX9~S1?<\4./u 
q9]Ox^sJ&s7t"`"%>pų.Z:ꖜۯ$?WdvJYYP/_"EEbi'ʲef;0e3=ӯwthZ]]  =է?a=Lq_W.NS ~{l؁A4.=, ״[O+!1.~ΐ<}SH#%ӫ҇;.xuz]Qno<Ǩ;Y|ftt}K`K,0P_/%';g}굅ŚtWt̐\ [{8b?"67Ļ.9$nh&zމ{~%\]Di 3I]weKžU-zNFcp;f,}`sf}>);u+Q#0zuгEۋtT(.p;f"$4x%Q ȆAF#2YhxHQ;\pLȤb iTzU^OY\PVxos]īIBz>_Hص\[epjn2NNJ!%iNsV>ҍe5RЀwqsS1 ӍXtWG_smۖnaY.c2>fhi҄W&ޢ|xj?Bc+evc $/ o4gЧK-JVUz>YO60U:bZ \6Lܷ1,эVǍF:xt~ N V9ߝ K >^1UŨZYIZyvtE_N)b!6Uz{ ;ik\xW1ٻVܳb' 8jC룩 ]ȕp, 6t6; --!AVهv y|)ԗ N-ͬ5`4kϩ@,CtSA]6삺;q^K?3k+"WVyϽbY %PAR$a&+$ Cnwp[̚:Pe9([ da10AT71w`~FV7YHMM0,7`S8~4?O>SjP٤ФA˲$u%K<25?j]579DEv9 ݛsϺ$T!:;3-搵4ޛDH LX)$ V0]& >$n?t# $xzC!(lN TԒ̑\@< Ȼ{G+_#z\{0E(Ua%wF fB?,k<./;fzvuQ#cG3$*Xt 3+ER'>w PPJ3@iRLJf!:}&:wBW6G%\1$' rzC<Ҷ,$޺(EI3^K!PHI[1&S7&` V$ŤY De@BbTZ:( 0+ j5r6 j:-1`5TAqq}(%ɱזί_]򭯋H<9o#wON,^;S[bd_U٣ jIsE9c)g%6D9fCUͨɈd?1½׿\:)qOfN[VKv}S_b@'˯k쀙`mHV%Q91,5$ዞ`#tfQ!f^2< &C [fi˨!h@M&lǬ\"DW;\NqbEkW㎡hm1jт]ۡ0iޣҘ#2 ,:YgKIUF0C<٬1XŁ2YEG‹kKTsD (I`YT S>Ff}؂egQCшcW(+kD9jQ#= /A0K;@ i)8/?42{" GJXJLYa@r \L-$472 I 0(B9[T8KԁlaHj\^zF8ŵ}KM]vJ0=_)A0MG˸fyx*HF8Zܱ>TMAmhehGq ~am"qsxGibMz(G0HHѻaPN5E62 dsP%E;'*-Fz,2# hH!0iƔeGzYЪ|9FS-'7h RVJ#=)71t7_Q6Tz3tOȥKt(1&AX;8#1JqUCܧą`(Ъ EzKhD%I(gF h2 2Jp$% S)Uc=sL] Fs1rtL.2)LFh-HLǤ7I +J Pi\K-%bb^#&0t kD}VȍNmaQ\̾O(c1%p M deB " QtK3NF{MN!d]'C ^>k8Fw܁d dy>4z}H MF4`xizzheZFf=xd>ਔ(v'aS" Dv1_,csPBzcg^$ew솩]xeH\bzag?/bmhtC/6Lz;f}<4p]z|7x]ۉ-f^J%wףgކ'& 9=h5O޲C=M汹?5l.c!DmFȕo EG+4*jzS(Rq16zMHkLuWqs#7W?MFkD R5oJMFDgsq?Fa;A ٸۓq'Z2a!X 7N eJ$QhBK"q/p)B 5x~S_]ui>ۖOv~Y=7ǒxg^+h(UP**R1=0Qi%pEd٣v i&$VVW*3]y s62j7kGum[FWہ\mN2+ f6ˍ;<8qj U&ze%"tqRPUyln/*{N:b' u0T  eIMmiJVH+P#~&Ir}7qB4px7<["ޅEj 3=  ݿ gKZlGIGf-ڟf,&}̟А[;Ʈh&]jl!7'-U ,:}1,,9ː"1c.bӵjDfOPb0ĴT^Aiy=}|W?ߵl%b׻[Jk6mvm zOeM`{Sf~_ʌEZjˌ :;2c끼d*mL$*ܱ!eV)5F$jkڡ$4&6ӎ,@pA~KrLUP|т9WODG5%ܣ+n"%uIمvKF @X|d2E[Mb4w7y6BE pQr\E% <PPpjⓨ郋6P]A-v̊m9TgjijuTW rIƂ;;ԄڔɛDkMdM:-v#<ΆkbT @cExlI/b ֣՗B޹X@7qV] lRTyyg,dYF TBYdP}*hB˰bb )0@&.?n3G޲W-{[tJC RabiVsK:J'9i8'`[kFWPcAWZƐZK5T@UG5zW)XCt1)gw}x`Kx4UiŒ^-B*۴ON!WF;,2#p @0 `V[o(jXDk/""]|)]"V0,W.ktF|l)ERsTupr>NuN`K V%}l]@jz+~!O/Ltd}*Q;\֩SW<쇺jKAc`A 
؊TzIFbeqVrM;vqܜrtW~az)!gAZ<5o"3I=q@G[BGrP[Let<9tmRMzA+2&NWԡ.꒺qiAϓ;ڳzԞf왒h֒%r1;AB>&1` CLF93&[g*6')Gtk0AlMZ:9p昔wckTj7qVW~y! ?Ax ^~I>M]^3_#o#! .n =b2m{7LM\6d*ZvɴI f`2}L&#w%0{㮚H⮚w]5)@ݕ't|V bHut03Sn?~MNHɗ{ߔߍ_]e6>{luHU{G5N[x;Lz񛸠I7i-nRtҬzc\L.h9qHaܚ Ӌq~'u1 "t\,43>X>x?2t`m ft 3ʼno. ̸;~~j݀ڷ܌e(?NC_إyAٙ<盛,43WjQ{ÇN&b]!(Ps2ʂae7YULz9p1 s\KǬj%  هӵtʫ4`CޣS,94qҤPRpJy5wQ7Vӳ7g-ۃdr0 p>}-JNoO~`mRژ٧ p~in>MZR'IٞO~.<O 8/w%wդ}F+yJ -+p!c$l5rY2ij)/N>hGn N߾E*Nt/txߙztA Wաy'_{27v7A)qƼ\a|X׹OQ~ 2B~5u!keWbAs!9yY%klЛL̃nyfpyU<~ʙ;MjOW:c~AL^d*!LMZ `U#A/h[Xg͵h(VSф*j6Q|c2<&?~%](_7h|YҞYb/_yh09j!}ZQ~Nk5͊ڗs돝=qA}|:чJ&rE/IKIvo# bY~}Ɲg[DL؝9;郑3Yb)CPc%M ;T"oNm|z%Ԧ^_tN.Sܡu9V t. Ӳ4˲ er4}nwW󧣲pVN'y2jMqFGe<:Z&٥x\oFI9yjeK6??O/KS^8 Wuư:ie8>Uhl0 +ע뷜wbI7E PB{P,l[Jz؇aa7etAڍO%)`bɩ: *v1D*?K*"j(E(햎ȕsm,@5`(#CfL(U eIM|%ؐCɪZWԮblQIGTM$ 9RPM|%8_]/C%&q!=8n"yȜd GD z`Kո%7l Xm6>doVI!'-U ,:}1,,9ː"1c.bӵjrc'y1mbZq شExt=σݣ|W?ߵLKlw^ߕ l\+F/$Zy>3dtH:vA*j 觶KiqЏk~- UP|`/m3Q9[k)RrQ]hHjtGV!Sh1bp"@/Ԓ/q5K 堗W|IΊ4eܛqUAmS|ZWomZRy's*bZ]6*l_M&c#$,7˝W> PjkS2.&oB!45cNTYFl T]-n"]zj2`+-\GƑ2uE_q|k&K޹X@7qV u{|@Q]U,K1 F TBYdP}*hBe.(db/(Ra0>M]@gz?e>[l9$*2)W.D)PSjiaZAt~2sҝǻ<t5ud5T@UG5zW)XCtDZS {ӉW|9XDl>:b\1$ʌ,(dYmUOaeov,dW7YtƧgƷyd 4uY>c4yDDH1:vQ%.%{l89Hyg`Kgu!Z:[ ^Lƚa\ArI7]r#@d$P#"ܧd {p Yd,9bKKlYBr#̓MV5=kCnt\4K$@04o(P͌%^mHKf-25EYm&/*-0!@!:a,Qi+q6қX7=}bkqpBGKH\bQ. 
dl\ab*#D+!n9tbV3>WaWۙy$Yզ%ub: L"r`CtfsjAX{6Ӯ% ZD׋ ƚdH*&!!"0R[ PObHѡC | dC0d$`0{()=9K0P_Ӄ!ݐ6z2cR߸1 d{P>X)uZ6X:bv5:-H5V~O>sϵw־اaR32',ޒp3 eᢌ3+%Ӕ*x8 x/\6'IĞs9ksW I=q sODyQ,l~EnRݬ NXt[!.ʜdv5sADD Y4q'di!dM$H%Q%X R<եVLYSRd)!BD_$ kh$u;0EF5$ Jd2PCf<".!!C2@ѵl{iEg½po@ ;7^1^}"-_RoH^5)u @7 h>`B7}5tEJ?d$~ӻ5I>H뉐N!/ŝv}\4 :*tF eS5l>)E$B׮8^8%BVFҹ zܥpV D6A*1Ad-(mQ2:P(5fJJ@Rټ&>|zXI[oNߍj]zj9}?ťs^}ۃOT6Soխ`rtJFbST>Xm_qc)Z75^13 61%zJBwӇy}Ȇu+Aoy"~kds~C"ܡ&kN> XL9ǴeZSJfT(DX0=hC l6e"kȧZc4(Ɣ`'OrHɣ[bf]`flƈ졻+h;j*ޑׇs"j}p| i sʔdp&S &SJgJV)2%LI-Q]OC:Ǽ\#J{ &]t~i1B%g0[V.E4-ˁ--H0IdP)'J#y+N)evmٲBAp E]0.Rz@eTި*Y:& B1[n&^ǐ:j>~: b{DoY't33Q.Nj1KIoOTBFe*!؜I( J!YDg6L2hlZ)yE/4.MR 0Zl*K29 N/PIuǙfl;qWu>˺ꎗǞ} ],v=bX{0pC{fy1/t&;݂yϺ;Fx M(k71uº5SJ@(MB = BiP:-0ݩ#se}i8|sg:҅Ǣ)e,r60uI/H)%b"o~Yp8$C$=/* %IUiamOjʝ b)eM-o(W߰.Xʌ*DP)Z+\j]?8T/Ű4NA+m'[ƥ^К?"`[ϊNmDug dJ>rm7krr#$"GIɢ!pfϑ'kBdzFl!b$< 8* jKG\)X ;֞8=c;6ӌm}a _S_x)QEEkRLrffikr_ x<0> ={HE\NgQO%  ]3 u" m`Uߘ#S6d0$lan'/ }FodXֵm%fF1k7ӎCjOsXB 9m( ϸ "y5zcl:*#ImlƆ a ˉ", >lj攭~YxfG8x{%؄A/k$Ód' ()C_'qD戡S&-*̍F+,RLf]J̤Z`1Z7k&*g~Ց[!u6Ӓm"68/޻/R~@޸ɗZyZWe/  i!dRG<]ŃiC\eњ2f|ϑW6s(Q'~d7,tWew/ 1ZiEB.(&@(Co6Qr͙GW3_Ox>w]|n," @hR]fS ̇EdԅEbk]d !un]Ilwۓ>qr}:C:c:ocR ht & )ZVDpUFUMb`: 8xxK51 9'#[Ai q !6HmCsݒ͓H<: &(6|:lzqiHGS: FXb!&,&pE.OX28s48Fgz>?u=f6Ͽr7(kRL |PB2|XW\&)>Gipwۄ- ]%j=e֠2k*&f+ڈc2E44;J[c :F?Lf5dyǛ}J{Xְ߰v]c?O <'˚^k6OU(p좷roJŎ/,w*l(l[6+(p1^Yn{Vٗr- O'}Pܾrj_g?ԃ9͏-j{V\ hB2/׏mD{9-閮:-^}u-Ď'ѣ˫{xYlꎮ-,α:7z:Ja?Ovt+=;0!ѿ09? 'ߍM|%%h '#~} ˛>5dubG'>q6UZ>C |L0XfK."o 0fe˹PskO㑑mZpw0lBԮF\ĩ?BHso ?zG̳1^_\fQCSV2A+K4F86J9=AkC?t[~M! 
߷a"F=5WnWc[U&z5F?/P_FbV_(^]u׮+ ǯ,bް]I\-Ư?gG/7'eO釣pr)l"G/E`I'#L+YW gINXkHtOHL%PdM5BEցy~w ̎x䓟lg.?=TWnfG/_JT}?bxэ[Gzz[zg\ Z[(L-3ezٽWˌ+dꆯm]$FiВZJ&l1GbBلj%~'9f۫U=>YrFճӦƔ<2mcD`k\.IUTR.8"NYK AYuFeCo^q-Y62MwI4 vuiۿM8JխwbiLd Tr @x75D.#bԤV"rb/u7_$G'{VU#>{ l;*OGbn[ώEr?i;k‘IlA'SdNH־eUl"$y_/^/߭"W9 >-[ /|BW^_s_ݺՁ;soXo=׳~Q<[ j}vdϜO|쇟~-rk `/˙h쏣G/VCoPd\4w1 r_MQGqtY&\]%)xdf1&W-cFl(ٸ,#U!G!N(&!S4CA &Z0뉞4#J8"L}7˓Փ vX !hZgFj5 @(Q´[%Zʌr_c~F[N )H6*9TzljG@Ы*b܀^HKjH@*BU4d;yG b߳y~) ICKy*M~; j{"&N?}w0^}bdLl1m5x"=բ$}uIqw5P ")ExSbDv4Oލ${sXc]l[y7:dݣֈwݺ}1_^6)ީhZR'd9TR!6r썍`n;\-gf,xWڣ6%b YƣfsE%kBb(;/ GJ"T4dBWhldPC)>8 (&lgv7`fkzUNbL-`@A-DA0ĠRbb2 Q)0@O<&{QYkȖ-:%TF[*9*JE|b_ 0)Nl|Exu5(85v5T@ ʿAk ^\D""Tbu*Oٕ>cTtbOEݙuqߎv4,f+"3!P @0y,`V`(jX)&kΎ6%@ȾƧ;^ƳzKU5-Fn[JRQT Pr&g`oku=;ik%~| Ѷ B]л^]yӻV=ol!UC ICҙt|/@EJDl}Ĩ-YW *K,ho}>dsy3{1X]>zj˰\#ɤs;mM!Vr&i eLN!17N^C7_Ŵ}DIbP`<&_}ȋ.|6QbŁmoħMeIHXwQe@R̆m $MJ9"8Tն"H`bA~ #8MzPb^4XqJ\RDeXѡ`m,iCAYگ[XRvm*Bj$*@)|U:2 j@E5T'N4 x4[MN2䄩Fp 3N&kiDx5Qc^i(V&هn||nC_fܭㄎ Q*β-;9=blo'pu(.t".npaJ{`={!TD,!"K  D-9 1 L Dm H*>GRRReCYĶѲR@쪎lfmIȎQ%;!b7qvK<&dH!ӏ֠hy 7N߼~6/y;|hۗ')=sKJ2YbM3G«m**g2]zF'Pq@&7 m̥\eCO&noJKt}e]~(P܆۶-K^Z1[3<?ޖA>䅏eֶZ;-٩\/2#XN|9]^ rl~}oYljw7~.o>Ō\6prnnL_OꉴhGE/cB0CZյG;R-!(, CÃxؚMw4ɥ%De%r}u(.iCJ·TSRA徭 .匾Tun C`G Y6\-TFtΉuq*{tzg<W:Y;%g*;1}mo\`SrVgIb`b!9ZVEb@ k Td]w޾R, 4ۼ䖼 \o/d&Om·VQL% kE &T24E1ST0ҢV<[JeJifs5 )hJL13b"T#XN@\Vz!dTQTM^~w^As;F=\@CCmؓ=84O P j}Qp|L&Iƃ}|; >2Ewֺǡ]ndtUhWz^?rz+y#0yxfq^!4-'I Հhٛ!zU <*ZSoPѲ%Pl%94&XJq\OŦĦ"2xgolD!mX*s qUrPt%r7qv"s9 2LjbePhtJ{\O׽>S ?D5rK =DQ+uPT ҉ػ6$Uuw~${&kl,atwU[iR'Rq#Cz )QMǀ%jS]U]*c#|)h5 z΍*%sDK•3d0${rbK8T#ghg}>tҥ=Y]]=um#a휭,tW`h¸!,qb0%h8HQ導715̺5AJ|'= @iP:TIɜ cfY#K3 'K}d A34>fUQљ`TQq!=W e睉N.60jPnTه1Ϳh7{Bʳ2ixe E7 ')ʑ;2 $s luq#*f`C}BϚOA" dWD ZRaȹE7gm(2K^ g`5>1{ugd~MޓܛTGE܉'SoI>ŷ Ζ |äk;0ݣf K.p%8c-YhKi=WXRcUhd?1-TBQ DW98*dRR5c5rkzJ5]X3vՅJ>½E29bٺӶW}n_4hx<}?>yJ#d΢s1fpFQ T]d&Ae[7ɬRV1-jSF6cg<*o"[;N'1u-8-=X8>&9\ys%oPuVn8H9Bq̤XlƆ2Y\NIeɔCB dQ&LY{O9a ꗇ>b<X?vՈFF4m ac%?cF !zN""rFL9,uUQ+}6KJg"4Jt)J"\%SRȹEcgM'\%EUY/A/zqwf%x0%õ5_kACdR>$ǸaևdmنQ, 
яX8mҬ\Yj_Uh,gb,4A8 CvX6)l!眹s6gx A dIR:yNf1 %Mq,*ȶ`85StHa;F,c|7:?Η6<}{ҞG\/YVvyS#O#g" #r2 !Zʖav Y0l1j{;6àA1d'%. &#IBy: KdE\;U74M<IRLo3vQOߔnbs7]n=aP?Y]00+L iII .JZ&XO| x*)ӫzie2 X%UL%)&eTLJ^[.2dCBiP)zHoT55Ǔ@Jiia$yfsrB(QSLqO6ed&A{i#?jFԝ$n'JpztM[j}s~VdI_Oߕʞ'kV|uMi Y-l)|5vƨ$3jd1P%CiǯӸT]Y(~Ac]ht9zHZ_ӣ.xv>}sB [Ŕ4?ޅӗ8;"FG/Y J'#"Lg/ū?SKi 1p nhtFU4UnT"Mx`twNd ̎hO׳NnKҫ6% ?/qM%­HFjKQ,%1HX׹ ݃/tvpޝ_ʂ!z;~tJ!OTjDtt7 xti6 ̌I|g56P$ZoMf[fod5 A]=ԝ^Uvr6n ocvcvl3y| 5\܃ W~7rKlO2]n4Ql?}ȌZ{ RIMAC&FN2Fq?,tT}z'*Ā}Ӳ#Z*[2Xd (%]fBNё6t()iJ:jS׵ ^WpθJ)2CxgVvMQ6K ;?LI ߾(gx/r48/^JgjW%CeHs2$QVҺKeK䬲|͓˲פ m6P !iՄt& ^Nvx] jX\ҽ%PSZݰׯhRZCcM|rwGEdPG?|7Zx*ۀLjDvn֕qblQg?1_=ϕ]yfT9Q릤Mkd8xό83Ⰳ+/܉L9g٠vB։!2Pfcum0\䘒U!@1JԠ&P)KJ+IB^Z 8ˀL c%c:HБ<&t^gyl j* E_bXckMAR0}{ YU-yi4^<"Tw.R?2gUǙCht(܊F9U/}SuIٵ ^-"{yci Ξ}N'E5Q8c@&xaE:qBxt2%CZd 5gۀyMa x4aW(bٵ6Js`CXcFIZ594z Qf_삗pmhlKu˰<Ƌ<y A:yb*B9eD}J{0́Rv]җ{%9 }>M|rK<;auhdyQu_jm#GEN'$Hlv;`wƞؖG,_bŶlŦe9Vw5Y/OU @$\AYK>^Q(tD ePZ7#e-J7޷?i&a[Hz#8@G> A6q^FxyuʎtQmzg6@C4ЗNXO7Bg6cC$H̨1a,ӥHD!BIC!6'loI;Hخ?l d'%ȵ~zؽO#@ z2Zj;08t ;(GB^jiW7dc=[<|y1|7Wռ VZ'2W*؃/ & \KLs{[^N ֢/eLާ*%)$+/>>-A ;)sRKTjP&1U)v׾+^$ ўXȐ.JʎMiPg[u qGoV"'b`3r=Z\U`l>[?}sl&o^^á Tc\ fc=N P y2aJƪԺolTX_c 4j k~T)5jY ɑuNi&H LEJ ZV)F{Բ-+w4V?E 6^kL5HR!뜵[ /n ۶ 6b2d|ݙP ]N)2:)m ["v}QMz}9*LMkњhZ5ߨjYMb2JKA3 S%ըRZ-5CE%t6r/F9{.-JcT&DR3XTI+ ),Y΢V,cA2%+ _ʙŃU9K UP9E_@F *%cB4fl)g\23F{`V*P$ @"Ơ|($Y.X( 9b0@KxOBO?2Jڑ2`ीul$PDSSqm6C1b~Z͠V~'+Ey Y-( %@ȿIJ$˪ؐ0 1XHCt=M$m O|OH\KH.'gV ;w^#m4E@XLYW)i0GˠA@bcJMZ0< ̆őB$]Di;V牞5c l,|r!|[Gb+ MMh$ڡ&(mΙ#"3kUr cV]WA:q]C55AϧհF*_rTHeBHqj.PkjpEitkG& (M 录,5c('`DDUAof L;i0-q,GOHAEKMvTƗb`|pV%Zlz%hG߫f7S8CGj[X<@S9i: *Ex9ƆΡsʺc Z*P[M)rT$c( Bʾ@P"3ʣʞ$L}1rlFΖI;G+pdlr~A_vd׍ЧX/Ο,lޮQ$P' r*ۜ@ A@`QC ]BF H).4ygj㜗>z[OޕYV@gt4x!(ZohF|- OYٴD:dk(.|C}O7gX W%4M NߙF;do,(?fܽ τ~8n3v]<-3 a_b48]y~`QѶ̲x%ti甭WNLDTbX0 _z.:לzni*uBmվHqlz\N2߫.B*zkbۏAG@[T0jt>o"蛴(g68fkyRIۧvIZBnm%f|]eB-4.-ZGNów-v4$m:JQuS քd9DӾhQg{M#-0v@B'lֱC![IZdBQ/ns9u/~`#R!8*"%g0e2vާ{W[:iX+K.1'#$#U0$Y xrTtbdS,1#6hQXDa (.f]S^1 UP"4AP: ٢IN>yZ~q*4;&Bd;GOّlm VqgM6 #7ߩK־fלM~ʞiۗPkD&I。B$RlNJ 
Eٞ#HBShȖ? ҞRɱv؟LT zH^JjsflUfqW]ظA>.\QTmv^񆢭Zqd|/_XcTD!D*,.,׽鉅/zI*0EDP$٦ Z)!6h0LLG(tadYjsӴbM;Vܱ/Z[ Z{@7q|-KHc\FPDgftVOԶ_AL|01m|T>Cfd(",0)&u,;SNiFwdrTFlwՈX# x1d J$xY8Q$FtĮȾqc%$T=Rd#-d=ip*蛑sFpyi}u6㒻ElqЋ^1/bM+N+lV|(eDt>FX}f uȃ^|7k?P7ևnTتV|ȭ*2S()߷' ߏZe?֯~xS<@Z?xRvG}%-F? ϗȟ煇O>O>އ` ~=fy|c28{J<ßE-Ծ{Q@EOէBIqA_;x1/Paȟ3Ð}u 3 廃HMjzy}xC_ f4ř{E|rgGs=\3'ӹ}zhK|jo^\2ϟGg{˓<=@?^m{oR~ 6f{epC!| 퐣h˽^[( ľчD ix1\yCR?m^67)A"А<-} {<׮E"`:b$J2CTx>e1j)GaC4rqlZg6}nޯ=ǻ {bB )$Ne\QTp*//xP bxP"R9*X rޒKk I Y2\HzI6PZ,`bJhsI&%cIY8A]6HͤFPţQ5V ٢hHa4# ِC\ )}?e$Ʒn+nMn^9)sq!`]R88ZS&>+3Cz IQCGUhƯh>5VZ1 H9c91&T;CT紦P:ӒMfg=<_1y@7J$6&HcAzo"4rAJo|ųggsXZqxHkޫ,)>uf=5*2fv[%4\z]@UWL7Nʰ‘n4O+xz`>G d](Qz!|x6T\E%'<->$y/#.O/?IWNԝ>鹿h?eΧJiy JpeTsq .}zN6)ZUnkUk*t&xU9(iQ9gዿص]'Cm\#4zOMhwDB&9bΊRu;&A)ȳ=P+οk(\t9ڡpeM >Vե6aʠLn5dZ.`g}`?iha$,5kY1!d"ڰt6]mB-vopvPw0mX*%Qi3A!%u2x PڝŠ\GƉtVLE_/RKr-8|Òǘ$ ԥMD'*AⅈiF(Qs, 4HCMI^+B/sOhg񡄣h7{(vq˪†WC= @!#$#;؋I }{:!${,s|ugc0YI@ˆ1F"3g).dx!{<͙\dfn}FFZ~k/ 2JЯ}8n% q2 񘻠s.L,IE)|Θ ,xv!R4`w[S7aY8FY9s|y*,d>[ KEq0ȥ^蛦Wðeo:Zv;bw8j||ZE'Z[1 @n#Fy53IջƄn20,ř`EwmӚ?=K,zXhRx"i#QR%fm91baW%!E;Z̎?k(оw,<7GغG1b PdM&+7P]IsT%A+6H &qaI4`-f~Q}8l'Z5fd'q8pN c ÒgBRV[1:Fa"(#ۍ1R X1@^L2)Ĩ5p:9%-9JZƋ/| {.*frԕ뇙A'>6 ͩΚțxCղm$X1fbp R^QÁ DEp;߱tUfbc$Yi㹳jN!RaB!)X-2:uҭY)J-6X*MtJN'Hnjf^kNZNnZ)6h!Bje4Yp GlJNStGŀ5*Y`D :#6.âruU2~nᰲڀ$' E Q #_" ;56>v!WmL~Mh+:sUnyga#X+E"шx`9(gK `Iv_gƱ]v^p5\hM}|Ϗi;"c{ ߍW}rk%k.(B d0}0v:SKjnn@9n^Kٷy0V >+rP~4 yb[_NsQQw<;zOӗC2mnsw_`Q'n_~nԪ=S~q7*͊=)`]ÌJR^gqR=UJZsexU#=m)M+.zY[n.[EQy9,O1+E JٰWxQ=* PT7 ᢁD-5|g6 A?WI[@٫& `s\7FE=IQgyS/AJtu4aDܣ] w8>`88nLƘظc#mlЌ;؞;Ѳ-1vNǍH*DHYTSJn"IJ ΧiH9u,bsH<&F &aD(h9VHtcmm¦eccklil8l6?ݏP7NM˻kS3֛Սُ}m\p1h`4^`+k9)Q慴qfIXIl4ZSPW*sx>.(@K6op@64rYb<7mhzYǭy|E>:-?ŶhSS d@5:9 828m͒䑳smw:@۩0h(y,$!$E{f ؀n,w%) ds"ʭ/#!IKqJO@BG#]ʑ)GYcZ|r1Ұva ĘAŽxm%G Y>LJXDTk9`uc;{u{M2e;}G\.r喙$LҎ[7^sÂ!ژJ$U5 bж6vim囯33>{tfщ3IK$t&A>Dp?t4 K\ޑ04АHka '=6 gP:ϩSQwho{}jvLL6JSEτV()5έ!Pc!šN Ub`"&R0cb)t&v^/ݼ'݃v>wB[۴]z(FzV \[{ ǥf]3+hgr {,׌ZCġ_3T2]3 +1+ #2T>qURuQFqa.>nz j֏޽ 3a *k(G@հTlĔZ:H IʙQcͫay|L+*OO2^>F1h&)P'}cOj@'ٝH꠴Ȁ 1Q1e$o{[Re: 
ؽq:(7NO&ca *36VJFZVv< -ZwM J,>̊SPw.Q#gEsN6Q-Om w.=P_raţa>W,[@揲muە:(KI?Nq6`EE.GEɣ")XSy4;Z@XP 0WSq]\ǿNpO[/DboWI+8YV#y= c^&fSVbIP{[x6`ʫJwJಖw CݦM.xM`JkucV|îOvؿoS!ߦ$6# f-$=~g7 M;:/mNkMt?$|C1ͮ"9F@38\Ji×Tvg*L:|$ X'3X'3SJrY'"^ ˕U_?&+_ʨ2+g9wtQ^`hfU9ɩ#!hü'tOc1OVmb,=M(.ٽapK'o+m$GE(]` 4vӽ@ Latp')8kdalHEΒhS @ճgCV/{?8^oٝur1Sy7o@˴5:.-xݷx`cWYA;o^~j-w!au>t`ExKba&*{ٌ|UynhVx{wOVp雍(sE%4]ǻ{/{a7W48ZqXݓQM4y} SRuWNIUFNi"镉)!Mp&ͩLp]9I7zmano.S-E%Eqfi]o庙Vjt}~H Ԝ]T`A9Ջ=6K+Ջ|~w뮖vꐮR+Xwe-*K{]/R]CwŤM ~ 8ؔ֠,dsW]qE8-rWY`E[㮲Ӻpw5p JqKwW8zt-3( `?!hWWf{ʾmXog:ҲϓkJ>v|-)&e\)OeYo{?~*OR}T9mBȣJIb5L,R#PΜO]lUw!P>z f;E_G~\|+R 1L衸1q]6 &e(@'D̺0ڞY_sg[ 8jg}{X9Yv bqi:\2ԡ ֟>~|n8Dxa_g;z\y29UU>ی?g2syjk`L_?d \Q56dPeoJ,mghFO wǏgaWT+}@OU;?΢-#VyԉQ~TZ?u+g\S?1/;9<$eDoUv4+yAk8V" Q.eYM#t;u)SW_c wLsA8UKC7AMR;tv _š-q1ְp+#"}g&gn#tMqZAT"11USU~qT""V-ѻn{wY,fS=!t4ܠHhw*v9_4n疡'ޢ.UX>y]fpY%ֱ{5b`s*E `<$"Cku {V&x'i:_L!5|7ْ91Kݢ(0,rݖYSd<~FRn;\ W6q-׋NFV0BaDᒥ,m2>AeϤsi /g& YgcsM{# ӉsRs;W`mHN8E$gdht^ГdJ \DFS 7ѩ|vT3k \$g)6 %d0J;*YO9T^^".Vڄ&l*k@%vU*Fuw6vQ6&,eA$ڈD҆ZVR2d7F" ]M˙:w[{K$tl6/vً^ 9@Nk Rdaز̼f1)z1 /z4ebmmŽBm:c @hwѹzzb_2X| |~?xltOgv, ' |Xhkbw5ﳒ_bB+Oևhut~4LH=/V^=nJ:2'ATWѺx/֟y5!;3WV[9Q$G%T4Z8AY)B*S8L`SQw~̓6F7bf/K>[6QFGυV()5khm:%* *7F+d/Q=N a[ZG (W廩?OoWryگ?JhPzy{QB6{3|.8& icF.}tp~X %β}[s{~;C'>&B oWo3/7OQJ+p*[0ܓѽmA3A=P!4ҫ|Czou)C5P6H\kJ=Hῼ3&HyI`*Dfv;=ѳ7嚍MY\ kqw:BEϖYVia.qR`(nHodX g6^ŸNzK{ .8hʫ9f/0ܙdB5J,7IX7^PlLT3%*MC5 Aznl $ TR$y{eQ9/SRN٘8t{ w y|֭Z(k2o6+fu3U3k.`U4-?4::0 M Dr@RpL/ы; JR JRYST)" 26fތwvvMP!gyͬEQZWy8C52E(3Kq& ȑjya)/]2=@t1*-Xĩig]̬Too_LbDs(sF *ɜhNy-E@ahDh 4i_/I_8~tFH&,%G 8[0.LL\'9YTyo8s1PYDRdbL"P?S/LO.LѴ =tK\ d'Gð0J-Ze` ϑ@ ˂qIDV#Wq-?˹rs}?y#Ҥ|y!Ck-r)|9>aOdL L9Zl`Ts,;ŴD'[C&`.m12˰n7@wD2C9V8I ()g \A(E}ν9I~;|CUy[8!13|ԭ+pR7`V66TI$RQZvY$S1vjdIj`Y[ExW|xћnjCz[z0xdėKoo{KJ 83zaaaYWbӶlmւ%gN@P2,VL׳빧y==#xebB4],'sX:GRvJU/)ez]UuB+YrJŴ-\t>>)$$yYUgR1Ȕ>0.j]}8W3jDBjR?ha6@I菥~^|E#2IIQe)I%rNT^d/F([W DA€ʐ6`Yq\8-/{|޷ZlQ>i~}:/"[z4BKEldfm)$2p sZM"K L ebj(.h8$.=Pd*q#C6Ix]l ) BE-I_Yj.7Y_9gyWo=n7EŮ-," b!WC)Xn )j/b )*%x6˯ZsÖ:gڎNN?HSn-Fт:bˌJY0:~eIw_{TGEIkdVzSfxd2BҚb 
^*.TFŒv! j=S*0,/)p0H\dLĥgSzI/te9;m}k :H}P cBN##-L@@^tzdCfby1u0+$ŤY De@Bb,&0r=V#gŌj'dEII!'_*e5>g Cxl Nze6oq'uԒtqSr1.Y{·Wʷh&`h5r f "LBAKMhF "TEFmMF$Yi%=) }B3x/yL Us;2Uaa5θ/VN+y+JfgE3b2ߞEee7A髄?1b̜kChE*:̉/)ǐ/znM3 AEd6պ>Vf%릧 &C S&j@})iR,ck"v{w j{ vǗ:8!{ Jcb2,tR ڠNbԂf*d!fȁH"a Qu ˆ$/< "j)kT#vM+dT7>}P xƙBK6FF0Y ʆlYb^XD r@洄 //ظQf$W֧ $ 7A s%r.6X uB]לy0=BgS]-A)id:aE vIB}^3#R#,Db`1 i Dž6ڹx*x2;z6BO<_8'΄/h@3*j2rs 9@)ELJ`*> 5M>FxZF6{yfrB Ժp㴋&fQ2Az&o˪U8e6նva;YI_E?.qf_9{ΥWF.u1.\@}t% fBΙ .ibwUKi|g4e7oÑ֙L-i[kkRKkb`Tz:)vƘp95,Go;/&cԼ?;`w͟&-W-[oh.^f4 &E"6Hτ.u~I_HхNoi*jYE-.4č5h~½^JfKi,oV} 9IsFsƗ3?qzpǃP'Ɵ>ӷ b4;z|d7oQ dyQ'?ɸ_'?]/^KRʳijpZ߰_]m\w!-C?"zͥsΏ]ܓY7nT6~l'On\]ނq΍`f9Ku;>7[.=5ӓ m5H T{1Cin\ұ=#>Aތ-3%IXο[(Zko:5Moy6%l.QZ4|df؇ hu;{lz݄wIJLlbh/[^$X{ĝyB!ƞI't= ӯ/_Jr ^wߔJm%Q d.e4"s&Dڡ mfB&%4^iK|w58ܾu½Ǫ=^M`+R ʾ`?S ֎1I?ti|$-qщUhdԁZ)eh;ݺR,>cds)ͥ\2JwƽVw'2#{i-A@[0y!)P3#l5:E&A:Mب`w[47DI8FY; cw*$+ d9[ԩK6i;{.~>7oFFO WlwC+CM}.ZAp8ܭԷ)ky۠\ݰvŶ״GL4i!Ƀ)MJZRPkؾ&-TZӗ&}I N\;*򃁫B}Bˁ+KWD`ઐ+աUV}B +c5 v}]*c`۹5߯;! 0u#ͯ߿j& Oi4Ow"Ѫ\}~;f$\l6ve7S?0~Tvl{Ox6>МOôԧXARC_ +I Lk&A|50~^LuH< 3i3䦸tfj _Π28?L:Oz‘z qu\ajӑ}TOW\ ;v \Afpejqe*U/:C\yeU\ApZ2mWlqe*^|p| yKaj\q\@+S)3–ܕ N\\݌2:~JucW+ ^%oWl;pěz>yw.aq$+m̠~+2JqzҙA ]VpejӮ_+L WʔtKa5nW&W7+S *#}*g}t^r|~&CǼyRW"O~OVܓPu7;r5To=y>3s-3y; f8Z:#8_"c'N}x8fQ:2&#k܁*i*ps8x׸]$78\Z2/:C\y񁶄+fpy+2"+S3U9 G0a3*:LeN\!خ;s• ~3\!\pTJ q%{6+.nW&7lW6T]#bd [ } LVpejq8w93xJQE 8OYGG/| :hf#]vO ;$ἥc,tfrf.2IO=|nS/\.->Uw|I%:Syj ;#M)f Ln [=֩T>>tʫے2Q6+~+©Tzw* "Lw[Ǖz3%\qfperMm>y\A%]YJcWi;Ǯ 7͸+SNWѵ^\#_+N6+K\c=\0|93xR9 E+6s!FOW2\9*;$'|WL"ɴ&M2/s~G[ &X!90mjT`,17/7lyRߊr`carӑϏ6bɃTF:7z\zs(0  Ln[ ѩTz qI 菿;LnFw2p*aK`U?0OWPsշ~qʏ:Jmmkc,صawC~ëWW^}r澽oPq]CAm_w (V V6O}(7|(zLJ_ Rgǻsz-aC~ߵM@"_ۿVyW?,+k9ݟ kjI?O-և~/;ྺۛ=Jj~{`|>ƣz({(X*??0ܽ1, 4 @x^16ϑ)0? }$C~}ȷpO_`י| ?ߴwA}_Wė5Y^k\TCeF-jWý$Qjo~v~vus{$KPs.ӌ=Ytsr5 ' kBI`wd=t=ה}-IGp=DB*-W yJݷ ذz,{{;&;)>=+ JwWՍ :ꝣ)a#!3d]`PM:/j*a?e !E&JBsj-T8Es0VrUQwޓe~&ߔۿKNFI)ֹ+(krnq|L2bOU(Q'GK+c&@+i:f͐iQOYa$[rSEGV`{4+?Ll,ђcʲXkRD8F #%'г("Ub eSQjCn8\*4 Fvj)Ĭ*rEȒBڦȭ.ŚG Ä4K1ֲ58$)ܙr % e? 
`)IFE + doS1UD.є"uȲKv&RD6XGY+iA*V├.KB@UcWu]0% Ph=yVtWU %@h -p_Sl-@ >f@ITܞڰ^GwĎ 1XQ4e gjLP֎_ݫ}Ŋte;b699x>KjGrͬsGӄ= &k;;+^=g, fac' sH0 j@LKp (Һdѐ&XVbZֺ جtDhv=!-ūE`Z`42:Uk< :]^ s.x@xT˪TBPlx,Dg60i[TVuS]߻|ԏZź7'pe;&{k\6Hjҗ c%f?7j)dHHgiX}vFG+~wI,@!^s Ѩ\j9"w!K_)W.`5L53 n3bQӪ]B*m˩$ZѮ;v"EH'^AwX!mRpKbLfY3JAX0QfeA7c: be#{(qgCZnQ:Bf&ל(ˀèuXo"GntOuUZi+ʁ*Bq Z7Q|gĒ!]ud1w@)ŵ*hu kiεg+вfIh^`HV`?,φdVy;AvâDD9NX ͑|yAXչ DDke{hTp=gD` drE@σ'gp+H}v= b4M@rwB+G}PɹObaF\ rA^M U1fYXT;CŢ!2p,QFr",w:X5ł\sC7ko1 3$8BʭdU\ApM;g20 $ढT[ QkOu%@٧?AjoEyN]FۡkA1b Eʫ.]8 ,sRb=x;/O+<%vXAZH(LfOQ 3($ci>030wPAIeIT%ZXU'j6X`em;V7idm!, +0pH'Ya KήTw D9jt.|ft]kq͇K׆8 ak;8#ϗp}=*:+ C ;K;αy kwN|^ٵWy{g).i-<˫[+>w77ۥgZ{ȱ_]6E,L$d,DܖGϽˎe[["K8,V/Y_QLhl]P!xx5t1_Ά\3Ѓh>kN?r6+dz>Q}epE3?~.l_'AٖoOa<2e@9!ZJ7* -ĔNb@WR@w ("' :$."'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r`N k (A?@s DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@:]' Jrapqi( h5PONStc09 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@J5{;|) p, fs{m tBN "%'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rt@.z+ ^:Xvy}._. (wR-d CO(j #`F1.\1.Z+{o\K'a\S0 + +,+25t(%:A{ ++gH+QWV sVDWXJ[ ]!\J+Dz-N3 kQ ]!\K+Dku Q:Ctutتr "6ͻ <-r ?u2!mdQljP!ot84C,w0{fsJޔz>ems$/m߻bZj~3h P*`: &3ZQx?fvH͇ .Ejt&Ȟ0Џ?ƟIuqtVttVEֻZtZ(y{ p\+f5:iKmq[ueNqIT)ٍ_돯pŞUՕ T>\(BPSNٽj)N&}LJEi*eygR٫:T;캖uN ͻU aUG&83A锓C$@E3D^A*P1g}ŪvorCG7͠^TiCRr\0(bTmk+V󦮫,dVqN<;_{交G {pY@Ҙ5Ԇ4w1m9e+/cm.OUP;."AHRdYf}VdA6F;,?ae2)`c,Ͽ=~:M. OZpMmlѬaha{]S'o>X#k ԟ&?xo,҃f2/睎sϥ/@_9>SGkjŇ!| W4gtv~;m&YUҷP*\*8 #ׇA:}eA.=yNx!\8,&Ǩ ?b:nBEfl ^ftū.1J ]3(.?|譒p@^lgݫ}JW;-1| .@Spl~M'mIAj>m g1\%nmիŭs,}v1צּH'5fHiݥ2Ga8ƆKoVEUHO!XHFY]jnbLQyULЧsZeA&2nYF)IeOWGR1ge\ťmͣhv-z}s:0cu8 48(?Ø1FOl0}6<Ȩ~Q7N>L'h.=pcGSkC698ub_Zpa~QyLuh:,~}u}[{8_nGNSyD-[+>“;ԳJt`phC2bo,)C-R\}wCq0UGOeM}4́kT^$BjRu{;;ry;MFᡔ$Ց$KUKV "tr A!*)+5\mLp,6 W:u/fplSpw<"4<*@߭Mm;?exx֪k>' uH̚,Y6:؃? 
w`Q* NFaRd@zgQ8y08?oij x 0_qlqz/n<}Zn]1\àEsA&˾uaF4D^{qe u]{h_V&dms0;m> wUt}&6š$27:aUcu\WrnG!Xe_ pǶAA}&i< W:-THSGtRs5עaYj%MJsOΑ<'IC8Sk)v).VՠPil&]0*yc8ރ1 b-}NSGz:8Ĥ7 ߠ%J`90rv EYK Lɨ\L1&L/ޙFN=qdZ(s9?gܸ38_^u+q9k&O;:CDS~?Y7ҮnuTZbAC\+b UmVN'+2/=bKvxSu \C]4mdN Jĝ3 #ZLcM nqIK,Y79^ ^ouFoktlD649 :@ !e wɛk!u *(9T0^ YV( Β=VcLPy~pPΎbjB&)蘑Y!MjD`Kf \PPZ6zzn̦}=9k4ԍ!@JԞ[x8F cRBNFErVHa<АwX&2.]p ?fs :c$Y?]mo7+kU|/ٻd\bd(3#%JŞXi,SRnMNw5aUu4#gO9^0|X-C GR)WW !?d]t(PK K  ކc-a"p:$ PO%29C({1eZ)ԗgb!KfVQjjj:INP3u 5%%0,FQJ(>:)Y I ʄXHw p *SO|":U>۴#@ú\yL6yA*}`eVjٰxGi9yv^E}O6=,VªItQi/"h> 9'1)Ohvqu4?>t4Jj,st,Ja<^Dee:]|d/o4({0MCy~t/F`0BgP?eӨl%EH *B.d(^ YiKJ,lqGH8I 6d :Oւ5( ErƜT (n;݌}h*߻f=.j1?98o>o;/k[={85늁X4ep^k>b+ D(Mw}"e6wӻmw,\rWx* tEbܠߎ7j%?@LW⣩ $@n@LfUd vcUϰ*Tl/fu)zyZ7}Rqq2JTBFH htƂ@-k6hK9%k Zc mQGOrɣ[T:(ZZZ& jRda (Mfm2BwW( T71KyeWG7=]#Ta(sU-';վK@l|D9dz.:Tj~1B3]Q`XV.54{(S#Cr( ]f2%`"Bqޣ*QVɈjVT"$c ŔCd=2E*oT@,] XlFΞ1Pwv6 O޹/N7D\f`ݴBW{"Pp=n JNTBFTb1`C-%D)9A9BT{eSm"C^] d/ \l\%f#jjV&/BuRP3rO\竲+Y|죕zzaS-vhY>e/u{dMv4;/wM#Mk7:a]КMJhP~% v DXB'Y'k-6'T$-Te]8׺.:Y#JB^"fH"$ #+XmFΞ子=$@xvˏ ǒ $y$*YX+f TMtbXJY&n(jU=a (.SA1# U9&k\j[9{x[*Z5?,URܳ4vtol8;킒 &A"H!r9)Y4DlyNlƦh-)$ T@*LOMGwr``jRRc( R9w#c;]6ba: G,|P,,0D UMa PSh :y#6X2 &#jm_ްdY:9w#v%s0EZqPP[=jq|HK)H6) zy5kgu7(YuT(b<٢ 2kي&k225  fj)[nFxۤ CAfq_Dƈ#"xk ac#kwوI!pD&]SfZT46AbdQ31ErM6BvDlI+#Hغr+rpQGe!:q}q"8}A5B|ԑ|J]} 'ևYy:wuc<Ζ#3sdFz;n~|Go@Y>?u*Ql,ah". /LP+2mУrf:q1N#," )0P]X=O1=lHXDV.!XTl%Ii|~SRX= 1=:\2?n<~s#nrƩaNϏ1))VE+f0E˦8E0򿩯؜AG` 8 v!gR>u#kAZIJnuXǔmۆ|%g1!!wvhz.Y=Z__|;xT~Av ;..򳞾imfT Nڬcju9MB}gߏ6C# W?(b5׃ yyC^$Sr?`4Mz'_jm8Or \:^]2ք.K, E(]#MUd,/'aX. >1jY!hu/I/,~z,Qj?/ Ħ,ݻ3s5iԝ0,n- dIohc^}yly9y;2+AL&lc6f;-Os~»9cJk6qjO-: 6O޼ցsɅPh3}B ]ng6㋹Ζ0j]6"e h)KB4 Tsl ظ΀GNr`H"5z! 3a͹֐RIV2x) RIk k)Gkzyurnd0N6m.# 2(Cj+L Rr9dS,uSf伯O[h! 
[binary data omitted: gzip-compressed archive member `var/home/core/zuul-output/logs/kubelet.log.gz` — contents not recoverable as text]
DD,8K#xԽqiZ\GA&Ûija{!dnZ):GX;MU[ T)L`|dV=ƔZCla+YB \tc!i6 PuqٳilAR uoMtѱ1k,=z,"zٰT=S)-/g+t< weC>yM ; ˥ԤjZ\}JǓ+Y%V{3!Irdn[)A˳4[5/&KܪuFEZXb** 1 v, wO=C4?`lCo]77C$I 68V#/{&W"h. ̝j{C5T v0lS]Dzc(`zB>E<O0?C {M9(t.@ G:%6!I1| ܑ oTRk5q-.1UgctVTO9`={mD-y>'2@Q4rzP =ܽ&^WaƉ4*hI(Az%fW TԈRzIY`FB>buKԛF )0(V1VpF `Bba֔o-T'!: Δs֩j%ܝӒkn g#E8v_P-Z)'KCj MeSE)zN#@._iZ k <퓤9#+SU]y"QeT.lyM7ku} jo&UX,3Mb?-ZkV1zPm|ENL2^׌Rl;NP7Vǐj☁F_h0,Vha{m>2a:k+_g=ar>a؜03o ;ņO9>ןgnɳ8VR^CB''5E'L=q⁉S0 Gh05eŌH3-!GZ w3RsD-lW<㠌{4|xn8rI`#9Mb$Wg@dEch4Y2M}F z+W5-y>ՙG>ћ!wi'3 !Fk[G%P l @L@njo-1F$wܧ@۞Wޮx6/ĮFZ1cޠd:b;K<RZXi/Sa*|DL )Iղc\ū.=qS9J^oz~D7c8p0`=GfD1n)ggZxvYZwKPMD8Bqѡ,vOȑkB 2 wYZ?yI~Z7sS}/?Y 7#/^jx~"qK%bNAo-,Gg(,ğEG" ܬU7fX2_ . i1~Ӯ!$n9'VsYҊb-ݨ| zOw#L1qγ̦ FrtWꆚ%gưa| xc+vLqrb'*P|`=h~72/^!\+֤l: =R|(Ck0^KՋ2vY}w &?_Oܘ߭fzߪI\[C&rϓigY(`쇦'75\Ȃ="`,~vxDyI(vWV3~)X#7Kb̏ba; e)"%8<[=Y,15^з:Lsِ$AvkoE^Ւ`8@M>>/(}9O:%YkI*5L>(u)!:*%䣮.R.С/e?ߪLJVɗk괆塣tV5ڊi3a-din%~3# !/.z.!%S0tM-<3*IʴjR[Цh!]$K#TTtYX b'@=]ϡ1 meMoGȣoOou7^ro>Bݯ7O~OXޟ_xqu,{yĥvqzhsV\5T0uW̻ ݾcJVHa++%m<1Mu4q0Uϓ\'-K'ן YlҤmX@PKF9|sM89}T3e7S1<1N@<RRqaތMG#SE|5V,|Ds3=`ٻ?嬭Î/Wu˛v;=ԛMq NZ`K .֞z^vF/hvVIJ#[Sԭϒ-?]}|폕+ W ŗM)˷z醆Elݔ>B5-砃F0\3l(DR`8tkȗag\s+7j SXϥ])b0d%1dM̱y S ocKJDL⬁ ?2 e[CaN (8gJg[ʎ90kBٷ)X"nȳ98 i'h5zpv/9f(S&}e7ݥs7CqNM釳":s6WuNcY"TEYﭟQ u{m)DN7?vW:?L+_8H@*yŤBHR%{#jU'r, Z5窱d¿xpY A+\ LJܲm.˜-H4yvͽFmJ$9m Znq'JN([.f_c%Щ8Kf}{P@+S'C Wn;`ѯ_ 0~}-`ޥ]z3C+Ww_ |0q^Yfoۈvzԯ^̩LxՏgWe~~U\3h;AJN0%jq0g^Ѳ&qUN@(7;&ZZԼEZ>E$r*h9x+s!φ `b&m[݊ 7t-_T0qv=^wވn}Dr:['[]{{yrD#һb_W?&B*WV|17ZZ&o4*z9JTcb*{9G#6)k+(^1fqYJbs_m|rIo5yw1,錦/: _w͠0 16li,3ցl_<~Z~I1~d4 qDߌDf$-1KLvg_x;czTʛoG<֠"v6 k;6߉qbQXdFRt3vo֟\wSY՟Wظq|8S(+*9feRlI,%p}D͙{Z}DHMyCSH\[FͅZ̠m**1Cҏ#?:E:&F'j5`53ǮeXS878"*S+Xۊ_PLs-K*A 3zWJjԛ&Q)c#notyi˶(f\kA}}ϲ*>dےF w:>-5vY/l)%WBj#.q{nؖ:Vh+.H v' 2{mҴ ųTfCo<'#YhgĜ♼ٰ~OĐ8&Db !,1d/)Ⱦ=&K(8s.=RUgU*)c*FUڐ`;@`GX Bw [[3]CK@/$>;tAKN Sq<RIXc! "#]0WY 혥Jw\۞C/yM y|~K,,;+0a`a!!}yj٦_ LTRL9b;|qblĽ0`afaIFk>" a%&fkl83Z7mѶn֑XAbTfyfXB՞6,v@;ػ! 
a橢&`9#Z{ZOZsNw(WDS y6ʅ7HN*X=Bn958:ar#'C朙W{ MU5"H6_E^3E5`!*huͦTֵkW2-cZsqXްA#g4xcͦW0.d`4F1}F{ R wP_o0j5Vfj\*ct:sګMmX)2x0{\(iNp5&ט5RՙV 7oPu۾vÅܵʡ5+T{u! 6}2]rZIF#N@鋡 ܼC줅G#6`)`\<]uk9HPQj/Zc1`LW̺/>ػ T\  Q>}!&.N75:*BPKif.55o~sڹ([#tץYB ݓYY)fD>KݟdMµtX9% )]) PJq0\b֤ ؃sTZGW6gkKKʠ-P)%YĖ F|0jm5#s^4c<_ЁR7⃃9pHxLy2˽\MeO!H`RWD׋xi&$* 04*酶gX;dk2M '-2u|& YRv6i5e7GW+o8Y[u!(;unX㜫 ),T~˞}\85<ʜSv*&x0HQ84 )Z6Q# K{s#Utjre\%]./9;(y~߇^ȩ)ӧAjG'8x6dVdK97Vav;ީoPv]]x҃6fwY)=J]* ؙT-*HQrK({8 6vfVG.y;PQTkdd##shݫOa{݀OiQ~8-iZ76(XlוhW[)NIcf( ť'xnl:9gc~ʐ@z{nIq&oZcjXFX[ omGke-^bw>NQ^uTͭuד;<J$<^!=ؽ rbh<9d"ӖE8$3j2)e5ޡLC2,0xW񓃕NsZ xe0XZNorV<6ó8#onȳQQh$O(5@w/ȉ9Cb Y=܏}-!ℑ >;DZIFFۓ8yr*ury{s?|燼av 7xvG`nȳ3x#  axp{AN 82GŜ=m`AG۵]wC4f֎̔-aצ[ANhS.ڄG^9/Zͣrx9*Wo &* v 3{/G;L?c slucMwCDSaNmNNs:xU<㼉#/M?y=Ӝ7+ 4UGOwi9poaԺYk~uFI⍝W {ҰOt@'ͽɭ_D E"cՅ|G6 ՟ ;Z2%׭T3njIgA RݴoBXO.M`%_W2 ;U%GfCS𙳏{oT}{wc^M>T?{+ߝLcdP>5v wMnN1w5O2?Q>u҇I5,: ZU8-m:`AFyOs\,c\8kS׎D@ॅ@K{>z9,Ყ& ^Hq!A*iM3看 )|A&#`v?#;. :v86olxxy~[@m7}G6~h+ l 3E [ݔ} ژWv%Bp/ ڍx3 u<&av n(L>u+qўHw*TѥNޫQ>A+VS*T tUN][;-%)^sJ!p#G|v?T]^' WzDُV<' T,*nlOźWWR{ L9d#K-mY_vfW!=2k#BԱ$#}դؔ٤D8%6{neQEPSkш7 RQxS%{0CY(P:( `% D3601RBBQR ҠuʗVj84 Hx΢q^Ƞ{Isi)9)2<<"ǫ*YDx>j)u#2^`Cb[@Y}u`.1Dcf ̚(=.wa\$HbeS!Pֻ^PKltVN>tLmIPXjz1Y`Sl>IfC&3Tjj Z>))Ae6P3SXs$Ɛ>6#h͐6c" !m!.g>B~u?Tl<41V[gF7DʽF[gt"J,ikY19YAE3hXKq@c9ߐ4( uͽe^*e+.y!e$pfTC)'_Yd;ɵp#9EHYiTPBJ`~4 lKmGpc vώ=,=wIf[kȖl47)P=f@RiE1cziv/4 f#/ JIyC!cʤt b-L(mtn!>3Hӳ O)֜wP&Oj Z5x4QEvdwL3έi(]7xTv}!AT: T"EQQn܃ U@Ғj55cVYav*:>_[i|0ů݌vaF3d{ɻpv9ZXQcP*Y^%P{-V* uCŻ - --\7F_Ӣh\'7MS&eeOìN0m_pJFXEH*P37Cl_Pݫ_zcPRs=o!zYܺa^<v~:qJT[mlXj:x[BmI96d[˶7l0v4\پkpvk6G(8aZfvl`p72 (;8ᵢ!jt(c<QI +3Oi9F*RSͶw4N ZΏ4(nq.(#WX|eG7j-M|!dBjckQZPϘwQ+Kt]Bw{φH@)e14Fgm6?d-8ʢ/SGp9Pϭ9q aRkte90_4!' 
嗳W(X( rhJH$XCeƺlZe_BtIc1;F tw6^ }NvPDlu$QtmvG؆8, {J*,L)pR<@#ΝX3dS+R];`q~Vy;2ƉŚg̑F;C*9J=!Fq9/z(7vv T(q*dwA$o"CJظ`;s8DLݎn8OG #kZCqV]ewK* &jZ}M_9P~\ujta|+f0'2|u~53L4ڶ=Qlk폷@^rdDJkb|r3I#f S//FI";ÑI&Հ# BP&;}Az]k8МWyeLp^ۚMl0 Dh3?}ۓk/~oQt^6)_+|4=qtyTGGGt (Qg!Zw?1BkUF2zNp&Y.TTatâ–j>ݽ!<L֓TzJ[ORiI޼X))3 ph J81<+q R +~J)sĴnw^ͻ.>C\1[nDyk|':е:y|'x +OXRo߽Yh&no뫅x_jF5n<εb20NC cd!,qE$*yVk] 77OV"EUPQBE-JђdSmcOU`,RK)a{m:c_'bs`DVb q׿d_? 7݃' wI9@)\ǂcʈ%nOm3ø60i+ Fu*LM(C>FLtO‰$xmp HY"3N҃GiC!EF=)azSp^݁g4u_J4P^+þq eJn6>R*pM.hג:k%0(  lZohj Ԫd2 EY!PI %GUVPfE%1QY2TjKA6":M eV'|Lj1|f:FŕB%mdR>/E}/'\,x .ZVqA? %noAQz}BSծ 9JERZ \d:?l| _mN?:1t=O~~xwה1޲o֟V/.Ukg75KV\;d:[hcԙ_SZTw?߹Xǹ#>maoUi!4\JR.0:G޵~ R4eRJG3"5C,_%|#.[d@agGyBkK@~3_3^иs&=U5VQءjTHjZ۪:`>N@!TͶ *.@j,:Drn>X@(wV\ (w}kiܥ3rpDAN,zzxw-Kx19WUj `͆=< _}}0TCFN}v$BXۉY98ֱRF+t-M͐`mu%hT 6(g$7uP"k# .Зa'TGʃi`ֈZ Gh<'[2<"]e9BH˨ 9N#ep^k)QAi#5xf˔dUlKĖͷ)h\! Ivb4.Pg#bk-ʯ*!3a,NY)E5Hְ!X#0 /jHѵIg4XaIaC5i/ li5fwċI9ح[hgީT{g DJh:Eu<Ŷ(ϔu$Ɩ5lZ:t=*CF܄=yyƽ҅:IR!8$c2L}EZixi'fZZoM5&unnߍO4mx/܋yp)e%UTn, &Zq nly OP *;2.jU날J.KCpk֞WM 29y\C$')~ .wws2& TG}zvd~pܨ.hUY.doMȪT[RՔGjX\M FGNks=rWA+/U~fNVEKR Ndie^%5$Mq鴤C;׉㾟En{Dsr('^bTw6=fYfaI%eYBQe;>h;fY@7qO+JQ%ԩ@  靨ԶBCԠi4-.[ 4-ֵNeԜD!a. NɚZ! ?0{"p j{A{_e  մK7JG#b /gQ}YҔhf1o,n<5 mUlKy2 B?]0PDb$ VG\ X_54Ĝl!BWoWȴ/XrmaF{{x&Dw#qp+kaCٰ5JO.G/hWzaiрha \3Jzqrng<=?dnOی\_"OɈɓ9F(Q7r}}KHB0WYIbVGi]1ښRNճviyJN|\*WxR1~Z1'(aYndޕpB;n[ w`ʴQZ4JlaA(`n?~l쀳FդV;ۙ61r\OfL`W}PVl1 !FVD} ]'qLjh'\Se`3t9 5{xsw6#ȁMT{O[O[fVbl⸁zm$( } !}t˔p<H;EEc|}4;܀1M g*5"i]%R\D[Ps6[Sq_ޫ["ъw<B9@qqVAPjNvaq<$}L5 ؑ\ =B( }1{[;Jpw7TpR8^ }Δ0ͽa+J%gk^9#Y]ʬnZyfhYtk/D+{>4SEܕ؎ӁL^gtΙ7!*gJPJ+h]VO`\FڒaFN  ̷I4 " Y4 ˢ94zS $# n7GΦ]ҦO4Ў )+ m+@27Յ;d_6G^te8@{8kD3K{S&/1_l O9bqs-JX%>^2[uaޞ唥 9m:VQ1'Ltp̈́x8߅_ L7;"`w2:]9{{͗r}r9,'6 %{E&5ɿ/wէيٟɤ/٫OزrtMJͷ9_&-L3ӌwDm[zd\>yJEXN/vpl~'[ߴj-*LQZTSMpu$^1UD@hRaSSK_T_3}0 ~Vr38~?v|F X? 
$#ޙZ6}=CB cym;Ir'iQd+>`'U mm砚s"o8h7t7_.ܘASU6PtL{%0M-VíZ(@ZZ3lȇ1P&>AHQbQJJ0TJFbV|F"ܪ;i}ddZ`.@T#E5 fH2\H2X f E9E\ak#6hЋi)>edLwS&%jRQ%.莄$$;CYb(xHP|$ͥ'e!}YTCJP:]G3Y*%ټ{ H&ކzs S!YtD7t]hp"3'*|?|4gG3ڗC5|\)V<5w{u9=V>Smb2Y@5 q~l GYXX+~7$cp߆){Bbq\pyGw˳l3wO@_bp!Aƽݷ&]K@h:4ud)hټp;g[\Տ8`ꃫN JVtxp9uUG-)0CE8wjkW[F<zh$. -ST!PmQ:*POѸ ]:̏Y83?iH#SϦSqτ*HYiiI DC87C9*NO3{h;So_LjJgLOLOLOLo2]3݊`fZ-]:ZDoēUvtƘV imv{鋊K_12#MgaQ醿Rm`V]!5%,޼+؇ڕsdf,qa/)H#',erIqߓ\tyFy +FztFRd4T$&ZX gn=݇8ܻ>uÝw#U? ϞzM%8׏hA`o?`4)z;]c0Ł ;eeŶO-Y m~ &vi ]NodjèN;qfل9[aDRQMDkۨnN q)QlUvЊ=KvRmmM*:s":UIms;9pm/Bu..og?V 3՗_w3Zlxk QsjS(a Ii/I1yo\kUȋ/|T(ɛ %n|uMו64e*v=#Y3vus16ROy$ {wdiݺɩ-컓7vB:۸Dַ(wSYG'WN'Qu͸gEכTI{仳\={9wqJ {<1X-M޵a;/XĆ@6I{bi4o1ݷ=G%FrAbbU#kcT{E01[=Ḧ@9nTU m*V|<-l?iD珳(ӻE|{dn#rа"v!l28Pݓg=g7 ޞsw9}q=x`~;}G2җ[;_/݁_ug^qn?]U|xqN|;-b1^PwA§qUyK'ˡ?-V\gNWc[\ͻӳO?>ݏ~xLp1q YnsEts>`0qo;<[^-<*ޱ~;JqtPz6ů~po fg9x?粈i1y }q}Ϳt\~2_q<0e*[oKdN꯯OL淧?vx[G+noy?cX~ոd tyˣ{{y}=XPcPXO./?^tK*_\i_P4 >WEY|Edpu=vjps߯_/ )Dܜ x Jrb./c]Ϯ 1=8"1_GVWx)❓/ʓRZLP>_-'.2>Ϭ$.Ғv?giݩ|KufRc1zbPJ~X x|>m ?t}5QaMYV-cXGcj|cmjJi\:ħ5ZI=I"i$ӘL2PJeIaC On03rkQ.hQ8m3zN5re!pm^RצCN9޸ e>oKo@Ĭ@DM4իoJV)f5aQkNv\2wl1z7#_CJ=f@ p_#}}@{lO{?lJ{iDZw'I;j#M[op~w_jܥRG䖛LxD>τHsdQ.H)Oަ Ҏ3+d2ezT=0($ﻛ @v|-#>~,Z~ϵ" ~HcǭAŅlMܺ[o~#/unu>K~!n1Q!!1gPf1! B,9v!ĭ`LM:-n]N%.ۓF+)*Rp6fUB61 , (^`>&콬G|6U_S^0jWW^a' K`*S:$4(</5Pt :v!ǒxEر\~њZPIhMi5[DkXՇ&ZDkhMh! uJB|fsKR䳠 8CYK kQIS/ۛX .QAQ$l\"#6;(_\ػ&<9N)Sp ҍTDR58! $D9U%JM)Ǔ`]'AC3^ӫ%w>b4c\-X᧨'Y5J),US&ܻIߢ^I]exU+ǴV;;30AafTCwcIq76Nlo1}9uh$Tw8x6%5_~hWrkЏj#~x Q//& cr'`@()|޾( 4ػZ<| uMj߷Ր/#j%s;]q' ϴ`s"<ьJKʱRp Nd"8Fs;ʜfLÌLJKr-M,DJ;Yxi42&[#XZ|:xx ܀ aq / }^$n;iC^26*L%2E Y@ʂKX9)9NqD#uI\Xs!bF%Gz?1ٸou (BChAgZnBqQ|Ll#,yˬ@!H,SN ! ^b¹(14`E=H[3Qoc2Z!nO%T9"'B8 4Ԅ:9HlrU2U:  rPxD :8:F4(Hk@Fk<uijS$>wOl` dͮ3dĽ" |?'LIZW@D>ۓ"%ժqxb%= b u^=8@1qSNM}W\CNKwiI>09~3-G[Vysp̻&<)}ѵMF%*E[uy.?7 EJ o*/Ob! 
*s%lq+PÔOV,%y$٘(g ox:B9)]Hh pPV!GHS=Jd~2$;TS6i D]|v]3* zhN>jʰ`;ʼn+g #^7 <84!/B 3H2MۓaҨ&q&'PqZFp:D̂ApK"JFdO։oǏO|L[5$axǜE(I93Dw~':4/dh9/& On2j/dh3_AK轞$1@AI$Q)6o8Մ)vΕDpI^ɂ6a4uoR$D*BH!g`2YF"`f@axd> {-֒9\{AWJ(t05T [-PڸlR' z} 7yP1uK[,JhMq5ػd܄ޭT)S6ahޭ|Rmwa!_ٔB)GV*)}GwNtJƋ݊Mn5,+76QʷҎmSX",V|DZO(DBO~,cT@qDoפX 2dȰg{BM> j 9/ QPI] !IJq/ ʥば'`PL,'lJx`O5ꨐ8}I'trO 9(aꝍ'pq7ӯ q%lGCI(s-p+<ahljLc9۶rJPK>tvd ?|!LG~q"xI(avY)W ;^=$r{OeILeq"$f`>p N\g"p33l2KS}30&sKȀ[Oک S>[F037 '5L0˽RcoBi\j"ZGt e(zj]W?e)<[9VkJzH9gHV{Fs襒Ղʲ EĦiWFXxV0(` Y7N[et5c~H­˄-ť2H,}GNKt[rԩ-5,+7"sݩlvzRN;rtWi+ڻ~%%ZhI:&7I'1gs`kKɖYEO\BzWڱƮ$k+"$߳$m?=ɨQ7VШ;HfHpBte0!Ė%yxPoi4Q t 6ٌ&-UY"ogL֝+@y  44Rp *-3~Ke(|H_yp 1\(tPbwVj()O8żLd^.)r.>p⹽xb1Njh<ĽmC"Oj3u{;48}9p?kp>Ƥ1-X i^.Š/̂[ ,(]Pޡ,&74.kgC%&ªg~U=#FgWBȫDW 5JtʐJPu4NkV|!&3ldRDZrmbOgp^ 0\%UW VS*alȫJ8LwdhVazF; 7릅Y[ʹ-ܪh+fjWƃ(f2"i^pnĊRJ3hnlQFg ?Xaq_BIGE ,0#:.W߹E4,5WT tAG!M5X*ǵֱ}\`%g_'g~[k{{sq^~#=50Fh"eZ3TXi. J_zw/n7)I `90,jBA-Qt4q+0q_#4UKg'㦱X3#ޅЙ!)n(QB>8 $ ^7-^^siOx+")bϊR|d9 ' 0+EJbj%CQjZE\{\5>r>;(GG+gCI ]Zh"7R ŵDM@.D Efe#ա9u'gנXגeȬ+Ec&S;T"d#_E5?j8>ͯWOt1j:~k5wR`}ͼ)քiBۋue5ꀷ!& G~hM{i7U#_!@vj/ xsJH $7%n<}>`CZO! 
HZmA2AZ}uVҐocOp=mkUe)a^Pb9qu=SV3%l-9۪V[PA- JǂV$Uϵ=][{`Q[ Dͧv@^,.e`U)%:(yu[|[;e ;lmnψ\KIkg֏_ _9[$gʺ:QKf _Ξ&H>]ͧC ޡ5\]^ 7/C}lk}r9 k?ľxc+JdGlT@b]2F==H@f~%%+ ~WP UJ& ^TDHሔƖIE$2 3bZIsP9A9 Ri]v[~RosjVgs0w._ĩχwoޡ1 وaU`8ݔtFf~o]]n?ԼBcm\ xj~!ѪGڰO/:q3m/ [ kY/mu bwS&ycƿͷm100xMA.v44161 O; 0L15`44r>mhld Kb03Hl dz1bFR,Lƾά5x V CfRma ՋY0,HҼ I]}&1%e,0į&}\KTkrP8f"1H^¶rB`}5o3G;wY=GsvI!p>f>F K 1@{v7[JnTyΛw~|p ߍ+.7kU`Q|1.ۇ B?vPwZPO+Xg0w6Pq~Vnx,oF?QC@"Ҕ HBEsRH;&Q/!*U,rnޓ,F0hc.Ǩt}u ngmM@HT.2Fi&\y*v$TaA$pK'&hQ't_b1KHf:cMȚ &õWn(d?]M8[fE_EM[KRsw>s VbX66 tg mv/G4Z*>*ŸkRj x}'u 05O<BWc5Hc%jEE8 +FuEΧZ!yT7{>ZBZT7Pt6+Jĺ 󩦒314EU1Wab .L%aXiVS1S?dIJ.ӫhJni,']>lfJWL{|L(W^=G1f D"ZZZZ{HqطQR_+p\b)]GHR4ռ>cAB!Lj@nJ>:7])QI-Wj27bbW/(% xg~zTΨ",54w8`@BS# S*4)>L|Gv:85`o<'+?103)1$:[o2 OG91O饻ݕu,v%1M21:!r - Ud2ESC tHԨ]4.*!c$2ykePeV:BnBI9ĚNp?+,zc̩T,r8\V#Aq_(|t| 1n rcY,HlL1wYĥqA3;vGh_ {682ŔS0gFJi%81Xh0!ϭ 5- N p#ny3߯[bY mc\L&^g]1M\$KlR{U΢ڦ q ((:bZMdߑF`B\Jx iB406"z@)A:_̧k(/*]ة22ux!l- mSt}=/jhw~[-Nx:NO0`N *nsZ5KFEG_}۶DV&mzjFqfebӬ9sR,r BÃ`(bFq˭Baqv{]L1Hx0p[b"Cp%Xd + "#ñ$  ,uq>e{1%}"'# Q@QbQm5UZXJ 9CXS Aiu@FP˚5|b_X')Y+Ă5e堲=])c^]2?(YK΅;&k?uJv]+' ?Ks7h$ Bs-f^Nik <[=vyՒ,l׶Kߌ2h|6؉KZ-/4rPkhЪB&y_Ԧvn>C Zf*ΣiU?1Ơ!g 0m[tYWjtA@ t${nm#cZ:'I4O|*QT"c Y >΃paһ%OsR4-LF+ \=FS6ӑ!-&/JQx|A@>'kA0Ԑ@I ]A=)Lݟ(^헆; HJ?Rb%"՜P_қ]Ҫ`(/`5ʳGqZ?,/浘۬+_8[ƛ?> ?j;m=ck|z&VA12=C%Gii"'rӲ7!)mμik[ olv3wJmEkGCUtODESxe"R:V9bN4q2W<L#$A cb#MȊ1bֿOXùP@0"adAAa-J A@$5T(Ĭ#ԙXauLboiT 6..M8"idZ!ё ufL\R#BFaĎr7_B(<:,l$ELk^O=̚!EĮÔBJf#*0bkVMI2YQWWZfsD^z;!V.?=R GLo(7臔4#ꧬ5zVVד8SM1iQZ:r0d k1{wS؃%uS!8gӫ&sK[1)%R[&^֡6W/UW[sJwjrMR:[,TSw)6(Esp$,/B4.V—Zg*L2Qy2{! 
6]hL$~~vOS*;ɸ-Q_(`g>aOr4 \I;QMԺ^+N aQ$vɂ>] *Aŭ=s:GZiIY[C3pJa^h *O[jF?kکw6 > 8U27Z~*VW|6>;v.>]S'X {E|,is:h)H"-Ѭaŀ;9wl1Wzkyx/6kȗ50)r1IC!ŊvW+>y40.ZNB]ϳ„?I5?أS1b>(sQ!D:[Ab6bDLhWc}gmu~BwZfY}*vPclшu} j /|:Yy횻2e:,,vӑ7KM w6oiσ(b,$IjIv6)g.t@qs&b$jT3^A<Ͷ3˥OOH&f< #d |9⌰Nٻ%W=mp™U_ !9&P$5Cn#Jۋ߷HMxkjv"_12"LTMltI6d|U9U/vx.f+Wt?=`J$Φ[=iٌ`Tv|@)aN-{qh,w'E!*]ӽuNsn5tu0y*$IT`bRY&=^v۰m"°;RR" 3{TJqj7j>y,6 jqثH:.+vdp$95Mj/Ѓ)XkQMqtO cv6OVqt;J2,ݩNg+¨F`Թ>l<)QXQwv,O5Z1nGQ 1"(\^%RU߮o,̕NRV(0HB#>O )bA֫FE=V=oۜ~i#+d從Xi@=M7'3Ywc9\i\I ӱWfKoW>ۦ:څL<s"}|t`Nv8XRu&%\:pYxm#q*.%vZ؞9پ;8@sRj ZRJ;V%jCU#""WF۬AdKONY5H.<%}ȱ\9l01|bΦɏ J⨒T@ 4Э!&K0X fN8޵!K. pM3bas3l?]ɅY2m(UZۙɽF)@աݙ eGlKÝ/tP9>9cGͫ)?e=c&):gALYT9[d{b&*8|OJ L^! kER+$EΞ "?LOxk&-ď ")awBJIUSO(.6Cg^JnyRŌkRl ќhfЀWhpjNV>^OHuJ8*0sEu9F~瓷+tڂ~%sA0mRYE+b;37(L^\~1)eSbg=y1 KZb+\0.^Z\s[$RFI[ll4ig1(}G!A B,$R0B3y|Fdl͹mw68򮎉~<Ɠl4e'RmMp Ӭ_F5aKe3)ʡXrƂ B2`ݞJ&>rƿ{3rA%spvgBիqX}uzBю Tߧj`Jb@WaU?K (pJ:ő|Gtנˡs2'td |ekkO7-ԝנam>>XHp(V>gKGMIf*R?CqnuH5@YZX[x4yB3YP!Q|-6]q<ݍsR`z0Gx;67:Rz(Խ9a-Vdϻ:[tGy*xhaN׏wKwR]ꋄ+w51];Tõ̱uLZ0M8(N(I_:XY/]F.~oTc 'z-f_=גA)=|Zj\%k ( x/W"}dUL@ Q{0ꌭ)=h_;`X"&9YL2&'ۦІx!Uwdn`pb3{"GnzD;A;^ u=xvφQn]$޸UvIA.6#ɷCCvzlFyt2p:4g@ QN4ahVԱ z {cɧۼnvm,W-Rdsk;?{ɛS̙.,v.@sj!\1s?7}+:K#<ނv.҇"=}>x-тw1O b44]2i y!~: sWmpɼizsfr3``H9@tc8+%o&,kO)ZbH8u걠ʑճBA/%TH~~t?T΃ ( p V0 QҾ?uLx-9w*,6 R9 81 WQ'y#8 n;cIe2x]u $WvC(}sNLm!wQےHa Q@fGbBP۵:`I{ͯhِ9i/)aנ[b\:; 8Ҽ+]F "S1] !m6?gӛ`/ :Ex=4v^&cf'gqrW7|Xόo,+o=J0f }w6cX|@!K !Kq6:~{ T%13烥יHT`w6u&)۷xOLmjvsn^4Ǥ= ! 
Ba8 q>KdϗmIP`% "e b{RX+4 LVXVP)[IFD";'G{Ooǎ2&S(zZ%+?3e8׫x0A[jR[*m<_«VZzE,nڂsDmek}%07O%DE\C8Xe猦 ,U1  ;;J.qø3y?Liu8HsgOqn fwAЎy }eO"/W62uGfoŁpVe4BۆPOةtd-Aʳ)}j UQy6d l.}o`6^ )Yd~nE@6N˂ˠMy- f-7_V+r?4P]k8ϳaX+mAzLҥx="I` ^RjW?Q=%׷*7o=SJ@L }z<]f$wiI0|J`,˜W˱[/O@<q*أ}yB攟u##W^9G?1E/eJa=S [Md{_d#%f^(aKL#^()yL#\2 @gU\)9fOMEck}Q "*ǽx{y|rfc[Ѱ?EEfܯM[/PJ7.zcJ1Iep0mI؜tưDh4]1)z$2Tȑ$/."/]rCH6K" x]Ji bsIt'dc?%Ӂ$]nM{ ӓLMGB#?]2OJn,K¥ I?dW FZ7sͲB vF Ҋ%b ‹^ JX БBeH\Yq*4sqd{]B]ˋ[ۥxá%V%P0ZN%&pkPi//չI8;hO%ޅSR Ձ  t/eS`c,qrŽ B;x&=g4iN.37m?x˧whޯCHSO⍲MSjѴGa$MؓdS-7ȟi~֋|KnC k0z6l:.Pyˆa ua5B>chVŔyoyr%9^ HIi;z2,Ln#5VR82ޞE:j@:XGtjOպWVi+ӫAUk'閽};|U<}<5p^Ր8dPEG8o|Wc?a*9cƑDai5B8`H!!QOF@TADF0 9~1l,QF#yNFm)f.M2mLnG)DUm 3  .D0B`ǒw b d=č2W,3o;"da]$,uNmb\XYު/[>|cH/)OYH=<(m$G@+Ykm.Ѥ7obͣr7:LhotSЏ,wҜ^ &Wz%#ymAq]/jЯMNkSoyYYQ:Œ ӸqMɞ4Uqf +9*6zv]/+O)Ý |~=" sJ9>Z-&~`f᝿$I9J2zU^fv.gZ}AhV'}J \ 1Zh/ǻieL^I D;iyitilX .#1^ łh?ni\&5/U#DG>QkN6y]k$}#2悏HaJONCFXiJ:REvpJFd̆R~_ˊ/L#*+orJ fԕ>b<6эhtoFAr^]l6Lϲd7_'1;׫(4쐮*i]œm,y X 29纕fYCD95?w~twmVC§2&2TvTP4I7 ƠJMO֌m)5BS?kN>ӠJɝ }r6Z?sg8i1Stvv<3Χy峜k% cw)oATZIۍZԃޜ >q( *9 l6jv>ի6e;yq][6+,dSjpjrύM[5pFlhպnYߺ=*sE5 =%cHE}{U΋읞WTpAqY۱Y&4:W(q8aput|#^2%uVgzfܵiĘEd>Glj2:0r6F̟o b^E>[Fq5Ջ+#̋?e0eh+6{=;eO3&MA7H_8 )4H\?d ͞l*YYDCC3O9ۜ`(!7zx}.QZhr8wa|JrJ/XQS#Zf=\iu /3kT;@{!'TP,ŧlf,}]u;)2buT'! ֡ L: 1H}@OZI8vC@C#Q#㙠"$l!m$-UB32-!h,!Ih#iww{6 PZK(Ɛj"@HS՞_ r%9Njӏsr `k h*c5,V3B2R "vO4 ȴg4\? `>|I7|Z-Ͳ&1&ZA, C,֜#ck"دjdHQJB)!@K1C]|nJ@N">fp%u "@46r80 " ~K\(H$ -j1="kPĉtEgfRwtbjуr7%/wɐ1dCqcWinfq4m*A }2 4;;ƑbZdBʓLRHbX2+8*1$ID FFFӘ(ηL08 p-p[ p^).nh.y{0Z诞kY^&+ǹ̶o\;>&la A>C6le:M8xI4 V)1\nݟ>. Nu?p0ӍnX )]o6 <x$ݠ3 Ƃ޳B!=,N%M0" :w<LHʼnoR@ $/iln߄;͟Cί3k-c| VB"PB[h%15abqM#EQ|x [ѬMR{3ֈ wR[2l / Cm s,/Cg8[**qh! 
E 9``>K"i4#m4f F^uG I c/%nUb/OH,={ yS֜8V{ ?~ԶUQu2T>ES/ZŘ;bņs¸.W0UX #5oBLd~ۢgEqG/#wE:SM+efi> p.hO+c5Y[*p슶Ne7?H %l~Bš*?UOx{_4v&B>-Nڧ|A,N-.d*"}ڽuwGrjdXME&zP\RV*عݰ3þ{C{-qf^)@@-rغ%TN/"UE:t"A.%riVaa`%pQ,sp4ҎD^v0O+ҝ]uQ,FC"Hs )k`Dk5_2ww ʒe,KrA:W/)|ݧm2ܸ4LU@t EqT@Ek|$]BlNipڳ$#́rOSM3J*JxDقI ^$Dǻe'iI0Iƚo4xSp d.~F{}) +wouýYbO2(r pE˩h)6 RSaj*UČ:c))fMJw?ԘYisF 0EsdaƑܺJxN+kisp"MIX^7Xv*m=ڎMVdHe$َx? ˸p~y6,(-iXjCfW7"m?{tysQoy22)$f|˭wf,ymӴvơ 䘖#H LyYS{\m9+FOkpjtEt=bxQtBmL`RU~NѦЕY+" Ģ n"ݨ02ޢp[G-~M v\Dk;}ް~UeXHc *w(jTnw9n.4mhfz[+wpr"e|5>y! _՗kpF`' *y1%ʸw[%gT^ɋU72Q* p!Deuv{ avk4h>&ǘ;?~ɚbF$Ef7Sv[hg \r3W:ޟZ{Gt,8}^e~DQ-8id Dr o3qY5@y/7?6Y#,8~[47nb &v8~zߏI]p$"c20cPQ j&.L)B;Ȫ`DcHOB cEXqbQ繞̥R:w^$w+A:Kn[%I dIJ0Zr!?#8jė=C)Tᖠlב0#.8? eK\N% /F}[ :ߘ$G.l0Cr^>8Zv}ew^-C< 5L[ŨwiaБx8\B.z%N vB u9 |sf!s@ a j6p B=^6:@A͗FۄT=ѯ]}0S+ZL 6Lᑗc/8,6ȼv~ 9z #*ft}*_W_Cͱt*5paANrMudvlU2[qb߈x! HwwP a 3M=f46D7'p~廍]Y\Ze}W^85Zݙ S\Z7͞|r\ECR򬜜6'7N!{p|! ^ůxJ -H`ISN/ň$%|U&ې4'%w7'a ƞ K˳+{+({+|?g><KƖeSz# ˯m.XҤRk.!#WW !ѥSr@3O||y W{O@Bbz`y@K>ͷ0 WpDxP Hxə,e4 CYƵn4qB}+>~A09q99QhM#=['s[X]*8UΏ֒!esv^FsJWM b+iH )T 1 4BHH"żJĀ*RvsՙsZH<3/ȽaTpo8RhXJE:<ǐJyK36)(cfm '=M{8Rw-7D'&]r3'V~fBO HP4xql+4BhG/5{\\#m93QcȹUM4$Qya8(Ƚ`lbQAHS0D9Ja'F#I4#ihwua(18E V^&k dJA V=%խܙKa, U! D ̬"#̖>p (la# G#F!%tY֚[,w87Vax\󋿘n{e̚/0˱ bW]ZV{*=J҂Q^C%sBؕd5^yJ$=KK0klJ&z2, vUbe?bT$Baxă{$I7Z&ڒLvk,[͕z`},l ດΩ#Lbo2cf :/wsU p2qSIlT{;u;|Ptx>U)x*)?c=ᶌ/k\5RnICf~úQ[.RT'w&mC$YlBS[U4G}lú1V7$vAƘ9F#6\,dZ;_ZАgY:U 5n}ZР͆L* Y0kEѬ4%C)_Y_@&Ŋb#k,ƈ7@1$K V%{ .xUF US@9CNez8G=j ?)/7f(2Aʶ;Fd8~eڣߛ֟ Аg9:=lٰnQDry:c4nAQ)G7MhukCCf6[gJ6s[{'ӓؓlǎ1l4}nhlݎ1JfNiwۜ|sԕXnI ޲!JfOgyNw1/$Q=${`НXe51{ y*S?{f_41v$\}먘Tj4A` [N)]F9SwΧ.vkIV{̻{1_#N/K/;kV{|?_ĿeHV3>O5ȧrz D$BRZ]dL-%X^{!rQN_OeʗZRKc, `JYJd 1$K97ĭs3ry?z〚OA+g舿8$1 stV,~pO;8+J_wыOMff{zɛMh35Vo0O1}\^b">7ޜ/İ;3瓥̼]6‡A1k;ws,ݱ{1nΫHiil$[YhT yE TLJ¨)&K͝btד_]/W:̴<O?aÉ#o_ ǽ{qcs}1W5s'ffNW0=8wG"M\4ن@CZϿ׻%O.bXśW1WWg*~0"E6ŒK^b靈-bѭ4?=Z.>? 
C8S(=r r4%K$iE$XJ5^!IƋ"rCАBie-$)r?kHz&aHx+-H mƀ"rtlVK-(#(nu(58jU$zզHUC*mn!v67zhD$ )v 1' R(2 L~X lTR=fS3(h ڣf8_ICcLkW4D|ref%X3Cx|0ɯ󵉙|MyAùw*ԟbaL:fJ>jgxf>t˼`W()ua$4b-&yoK]u2@ue+$j@ a?˫MQĴc/CXմG$mH?u8jPE՞Eihّ4,`dO}%RI@'AU!N}/wT+*Z׽㷞xWInŒ.~HATyOݬlT7yWS7Emsܲuu2,Ƅ3>W'.,KLԅLyY(Hf( nzz5Q *JPħDԞ GR5pGXvCM Z2xND" a[Yݍ$l;c\&EWl]KnڧQZc0h%BS.Jr(Vx5Ev[9` ?ʭ@0F(qWl8?̓ 4 i=6`L Xwٖ"ni۲nQpVbc(Ȑ\<waSJP t<ޱsUQ|ǽ3Y)O4'qׇU_?4 P;݅Po=SD|m3c˔Qs;*{*)>w- G;?3F㵰KpĨ;/E w0ELV{0{8B$Y.0'7Ν77?|F7ߢ8\J+=̾LU#b09u4 O&#i EGES{fp30u-K8s1Mx"@ۅFu1MGKTP%EH)›V=Zt)yCs!kÃfۻuHIug.݉x"sm]մ;m}K/5nQg3<5ykſy:0O%fdLYFG8e{xjJHkYE}jzLi (鱪Y]r%=? 6ܐL?:֒C~ Lc~܂k}Xb`U1l]z8fx3,⋿ ojmƏdniqx|?ϧŸ'`DtlOέџقFamD[{Wt\s|BU*`"%{"x"J!$TOqy]1=aҮu*LŒ'>,B^'ɽ/OȅoE3lL~_<]i.}t VA8I(MN}r*JY=8WL֛3J N~2'2bw-@4%Lh0*>W(wRjrB ީ,5)XQ\*H0~y \v()aERsh̤)t1eъE!`m@^YIsz$mKbDZ|) ĈǡOqE+bx ZOb`U_hK-2Ư=Iy- 6 uRPkH[@1N"1{o$G`HҗO 4Q=k4g꜋! TvZ~>sbs: Naꐝ*u-cly]-7{-,ṔùZ A@u(~P!C ar,l`=q ԰'n$:b^E7S&MLHrBT4T8m T{N81\[Qpa:(BL&Niw&);:) m>$!p_m@̉?R× y_tEk-$O&?>qlV#?~t`bwW!sQ/\I< {3Rt՟A H_nS8WZp[t( `.K}LUE6ju+u;VSP#~QkCOW%-1Z,KK}P,SS){,`$M+A )#QW0VSsiV׋#Y!T"Z` t?}-~|pGKg<3r$޸\Mp!VFoG*6M@ݞՑ؝]z\@%cw1yBY(0 MM\YHOR)HlEpYoj4uDwhqQcC8(c Z'qVLѡBNb(yvedwۤ0; ̛pw?ɬb絇V`!Rpi} f+~ʭ*?^./1FsP?^|aćCf9 ,bIdyfoW @T5ٳ^P!=ǚ@)儊II2+QxϤռ׮|Kl"|!el12`1hf=L 03퍂4ziT.!hi JSk&+;|n^k/F]BhXJׯ[plT2ldJf0z .ff ֨L²AA!+Z'اHe)>`'Tw]sM/DBg\ OgZ?p.u/,4G=1.^}w4?8[%JAwM;F l~K,/?gEȞ1rٝne?>?w{4 ldk{ YVO ƓWm4vKU0 \σa^U?;q R{lpHnxwVO68%=uheoSiaG-*\JtX /gk2 Ϫ٨ekU*xx3Zҭzg'лS)Όy]ëq>>'4? )5Gq;>; +#hkQ=n֫_ BÌo~vyS8oDi#ӷ8Ok\bZ٤@ 4Eɋc `/nLMZȮ{j#8I.Uh,)4W" +MDNsL8`m@0bh΃!W"JR_bʍeY:[Mr~*O˫OUKa:>Eh] ~2ԲRxSu?>-m ZG>ė0+f6?IRO3Q1-C )̭4JgwWAW,A4V Y݄yk-I2EԞGi~:~0&+ p≙7L@%-L+;ʲNJȈY/%=%1&A~zi56/Q'ngPV*!P*3r/4Muř9sOe"Tcr0ԝ#*g5}Qఴ!O]~jj!7_oFCԆ &vM%U _:7T=]&Ȋ1. 
/ޥPeʃϝW4hEȱ9L@X׫2)~>Q?r b.'<2lxF(F(jXD-K½!&kFXu% X "İ('1NG fRC}=Ogka`6ko<]ёx[H#Wjˆlv43MLV$K6h^zƻ{ TG3KQix9kaR&F7g\ s7p}wwo]XF!W MJ-ڑzxyA.թ}dnP㏳?Fsf^,Okh֍n]r>x)&{ s54L٤9]YzMDW<]YBt,ύ}A6h>+A6h8x|6!hdC-ZXbGгEk:RHhp#h#1+RIr"Q˜T,dC82!;OZTJLE;A^8B]^TzM'ɇxsk'N,ClyJh9eq܄Wxc칊S'oC.8z)Sq~WDvdMR1&c'ɤjzHCުD D8,P(|1!*1TۓdC=FN>Bʥ$#m#TdGS䄣699>B5VvG8I^9:曪s7ߜ$A*oBKhQ9KM4 @g];+p(0XsBjB-Hww5 cJ+4 uH4 VɰZI0&~WJltZӰQp(T}18 Kɉ_-k zoc8o=;Q'.⍟ g?yC9$BsI]}(;ovW`oro;lFy{6/Zzx.~Κ3{ܭB5~K?/r=K9ݻE{]?'U"PFI9@68 3KTlO]9eKޣ[I+<lj7\nsm$$\Hyf^&sՔgWMs# id \b٤>yt*c?כ9^9MX>UIgs<g)gusaeqz:Ѧ\yUݎ\x iȮvQ۾S~*"~?gHY8+Atx|}w @cR-~g&v⃱F{0s|77Oec`vgxnɅ.LO YV'.gWn>W?yˋ L K\v3̍m6wcPl̻bq8 |>_c/FQ)ċr6<; {W+Z%P1dRa.Q3dnj䛗o%36I႔&V4B9#ƛcdr}WRȫ튍& )>BPH|.`2G367F k W|q6 QzSJAbd/ H rw].]0V&w*c%e O+y>R⯺j(fj15C6a.U)sjp4AW2Lf(W+zxqʱB+P ?6BO $'5"l>Y `d&cߨz5"=Bf$qZ?>91̹au sa !Òy;rjiF,'!Ubdry3Zez:hLjCe 1{eh E#v^[SKo'7 gh<9wf:XAۨfjpL/ˋr9d?^1|?^c$ y ~j` e,7䒒> -.e ܟ_d?Y$,..JF0P t|0Nb%!d8?< +7j2sSKؐ|Dtf!ӯ GJxd`e-ՁHUB_ 8 1V)CfӤPe!7S)sJK^9'\yƬ2F,ǜ@9z$j %z]+O_)T{!Lt) J*X]z ƛή(m.Kq_D+诰CcohJVDiDZH׼&kskZb,H)<9֢4#H"BRMŖ!lx 6#\\8+r09) *6Jc-C4HO1W{K4H;zJ30`0< RJC.VF|#$&NT8ĩQ\,Mscs!*/J6)_|kIŒIRH|ZlUS@K6IR}/ٞWP'9B&`ʱYKR;˜247V:p(Lz/(2ǭ%;{ ,'GaqF's%;Dj3 N "\iN`V ^B)P*u?<Ɇ5JE.z뿿dbwj`FU]Av᷄J.vUIyn^3Ld)?  {KLabY|P##\jZAIMMYy`$Gk(>w^Ѡ]na#w| C`1a.\ޑEkj[IF<]a- [X$p@z->6[( H[$Uz*D^[-jA3pk%9&GxtDp\$@2TD^sL 9^oZ[.ҘOr=Y( ɇl4^J~`zgqi*y21Ί3g>Ɠd bF_^-&q"Eϣ@ngWh6M9-|Ot^d3WbnGtXjB{*KT.jҐo\EtJnenTQ6n7#٦[U[hNRal[7Z:ݥuKĠ$qul]*rJe{-yuACqMҩQun[7!{aR1:mbݺ[Yfw@+Z:4WRn-oY7NKZT NjXcS۳n [hNQ=mVޙкbPEuR8ĺuo9-nں% Zպա!߸&@زn^*bP)/{WƑ Ϥzȵt`2~B2TTe5&iaKw:kUwlhJn3:Ҟi YOQy5CaUŠ T}&mu`@bEծ[剖nup3gJnRV2ShՋ&]۵v<Ѳڭy,Zf_@IB!haD\ќVye[YO)rl0j.x[ֵ* \c빅QkLVIT[7OEmeő>`ԲA?|餯cF5k::%A7=6`m1ۚfID-0jޠ<- օC4Ƕ'ܭnQ8b96\$mcG{1$jsmVIr m9䘁;cscS0r3MscU{9fjsmVI(K`}As1LsD6)R19:%^y9fJu{>WcWǗc6k!嘩19Z%A+v|9fUkF>3xR19Z%m۳3|scUFǗc76:396\$PI/,$m1) B#t|9f9gcXB5|sq3̮$ =h|1S<$A!Ye !TF 5Q1ӗp~<ܘɠoO>LNN>4 il  p< 2Ɍ͘83رiĸ{@Y<`< 93Sԁ&cV ,X-+C@ۜ]?ݿ+AX8h!Nc|Hx!AANEcU9 =2`* dQ% ˅LDJ#X? 
+o&c-}XD`Y gxJBHR`TLJXFÓ h)q 0A)AdE71[ca '02ҁkvh*UL)."Pb!Tâ"pE+̄W\94G?#Bk*+BAxeGcY_f؀|7`3 NDY$TdI(3 Lfe VsKiGe4i|d^KJs=kUUt~k!JY%F`A! qlrS%w!pU9*!G= [X`,Uy΁ۂ(0dXbX zXχh6tU m-%#^h8L ^BcȹO:i/ӱq6|Sja!gpdp^i2 g7Qs\i48Qa)ؙ)|ߙٿߺ\C7>tsf|_c"*~T?f;M:pI\r*F. fpwNÍx$!c-C/|Q緟~+%҈Hat]O%'0FZRaȷKڌ'%,V]DXpO $.JzBЖHNeqg?vCFo@'YANƅqG_v0\8>Wyӹ8t5ASӏ^43#ʈsD~1W9/K)M$Ec<Ҡce178ZH81\[5|oiR5"@qo+^h3U8N+0k 0}X}8A-R=ԕg [C(^QΦA%IeU_gbގ>qfرau~ s,@Sg4 ۡDP21蠫07`i؞nW0<9L)v5"Af9X_'0'{75 I\\6_z ,):) #e_t*AN ^ݐ'^Oaz%q9 =hh)?eV3~03a"_M͢G!XKPliؙ@V_&$^Ϧa<乗%9.~4^`v?d;'`WV& wD:[uT\?Qz)$/,ڇ!+QZ^8WмDj A_F>L~4$<~c/: 7} *YRo#J#zB`VYcPD˥n 2[bv[OV$+y^%Ϻsgh"[ _9fCdM =,DND;#`dTFJ` jJab8_K+4( &AWPv(M 1PJYh0 輫45◳V_ k aR%JLN]r9,qggVP2f%%diO>N@("e p.S^Lq׳+34߂5 1N,e$*&T3+/iWj^:$:|՛wo,dIrx{-{CBq¾IrDIm&,EՔ[% )ŽB0K*5T'J>Ù?)[UW 3U c qKRǣt$-&|zS>FAХưL`20pIKΌfEg}I`-eF[RHoAU B0])Z>%CjqzU3il LXYA",Vk 2-,taKv=(:?U5QJr$|yd@9Yg[}( g=CVʗ )خC_/(o8|k$FwZPuVe$3#"8OBsF&alHu6[&{%W :ʵ=zBagVRhl8+^Ш0Ö4׆iF2IYy x8<ʲ rPe̫0da_C1Rh⪗1,i n]|{7VIJ$ڛLL Κ R:Nِ }s BpFq$&_ PDeWed,=.4l4;΢EXXHmV2na|RynV7RE m0[eWD4]Bz d^.vt)mUTqXC\#6z:@P⪹ P{iשbu*ucN$,{5n>J7ίnQS%!joOjtج=i3p(ŕٱB)طPj/x󲜿yę`\o#yg))<hFHKVTr9ܝM3rLV"gWirR]ŋaD.'JneLDgBiUx@WWSS_)kwcØR7JPTo׳z6Tß}׀_?7*_dqrҾTC3f*ѝɰ9{eV)dj1)PDɄ$>c0Ot[ae6ZBuh.l T7mw7t.ztEGmJxG1G=b24` *xu*pn/y9p)NI”j{xCH*o745$b$x1 ǽB71J._waoG2 ,Yq~|k%[Wf(`%6.AeLsZqt9Zݔ/˟u_gzܺ_ep18mՀ2ӽ` w[$b-dJ}f/Bz#ٛ'~EXo]+s%N"xmPt%*>*p*-JFI#̑BPi`Eh)C9؍oq u aQ?9%$ \կQQoW|ʀI6!9ת~kE'3}7PH\"8KgeOLBڹh *y@& D)0` oH|nzʲo >,,< %g3;Kmi)A""BtZBı5IRXsaK,dLQ`e<9{w_altqA%ZHTAz< = ^R况؇]5^ɸxWyegNz_2x/:Xr~9M\dUQtKjHKMZ"@ M%'U&Q1YsJn=Rm\bn8 b|P}K7\8'+w[ &e?S( g_Դ XBPO'PHsMI֫ۘ^Kr*4ٳwڐ!=^@٩PZQh̀)P/䍲Ř/Ӊ:IEWTJ=Uba܊g5k9(u%Cft(+j  7yRYO4 Z=hx>&%\ } zؕm-EvgKW%2fԲiY2ԏua\ ]O P2rZP.PBoP@ tόp`5'h!SFS=oVqLo" ^;ë] W:[DO r4]Iss+ պ<κv+o b֖(u  *u +KXoJJfޫzLZs-36s5]nODVhʌ<7bg%&at^5Z>UaNrF ff m'dQJIϗ+%7\X%ŋ˚*mqB"ˍMr;87_4nPdhM-phhEݠZmǍ`|bjM хKFQ?68rOmjjL=-KFa'Uv)'`cѿ/0&:J}ʥAe?rd4򤙔o+8Cq~*k﫺~nq9;@RA(e) r1y)rX5F$Lͧ%ɟ'geMҳiv܏q>ᄣxDS&q9,sNP:k~Y\OnBAx-=*?<ããt0 $jPRC 11#3>ilD 僌\JЂɐ̉bh;Q+h; 
Ou++-)?(uAEawS!rPic?Cs6 vO![kk'=i]̍.R1Kz6.fS;_B<Ωjܬ;|k7 viv@u6r\eq:mu)y^m]$'-ÀrQwE]nގbfkoX,ÙVWG@\:A-c~ $ClA :PXŻſ=2sSjJm[vr͖k(h3`ek'@MT9 8YIx2;wܫj%3 J RR<_c3N ~ҽ^lP2J>sf5{V7lQ[@nG'#QaK@(GY9E4ۣ99mTϼ  >Bk֔n;EL^e1bzU'R%k' .(d$ #'E%!ʨ ɓ$ jQeRd v,FPȀ$8TNZVR^d!&'X. ȵŸ#rξ =G$?'U&qsqHR݁"!kiME; ] V82]L~9H|~zL#:e?6ޗ tbrh$ 7w{dx`&HIV"Ԏj+ͅerTc=igpP2EшkwVxT;2xB}aQDJ@Pdz3zg?@%SF6"8,0IDcLcs-f5>FGHxiȎjjřP խ'UJ|6 2+Ɋ(*_?r6ŗx_ ] >$<>8k>%`aƼ~}fЀK/ %8ϛۈ }pCSɪzg?#g8bDwrQ*TPD'%!0d*G3*XS^L>ɥ\*ЂzOQ/c>v˘4TwTi$L!9\nb F4;0C] Ap룩M1p!CЋ& (P,f9Mfp8k5Cmc3Tr( %ev)d,SJ-HKΫ+Cʐ C l"S7PsBp|ɔ0 [&j3MoC,xI$U}[8VI\0zKN K"{L#Bb.nmOqDVz§,Ant>}ZM}v4,5Gv=hunNvNDŽ>,̏7צ>h1]=A{m]ST#z87[f; ِ74ԕ &9 LW; 9wa@-hu@G\>XӜjdLwsw 6y@*ٱWk43оfpnP 1v'kB7j{kcTzd BHց1}o A,I5 IvEpzm̨;1iØ>+%#rm`d0ĸs8fuPS:K!C⧀7łZhg]PҨa4tfEo<7%Ig,Ƃ3NC!(;._;i gkzp-~q6CJ7wĿꝴӲ/Մ>* Lr6,sNױ6כ&|M: [;eCSzDy^*ܖ땕XWLk/uϺ5P'Q9Lzh$eP104, l"pJD[#V3~[[Cz9X2M j0Wh' Ii{NqϖCnF65\,[c6mɿ"\}?΍&ذ&0XJeI4~KJ^6i+ ""wg33Je+RE 3q]&QhЂ55mF@UgIflr1ϋA)x " DZ0֢BrǕXh(XoY@W>dS7 BKRi $/2o![0XQ4i@5~gx QL7bg_F(F`V,4,:$\")A0S^)[/Ėߩlk+VrKvڐTDE܌80IUvzNn6$8wi_T83.7O8g<_%3 DTWA#y@PlÏA?s%GM:wrQI &yD/})C8@8d0<"IXh5}%An(ky9 J LZx[w"VOS@oSa8|WNUM&:=wPieQHQ$Pd#EDGj$ 1 Ƒ aZ@iXJ/m(I,sc:`66 4HoeI&ndHs+VR;* ݓ l!D3pŁuJelvFM;15C-L1Ҳ>pS+"A)&qI[s.g_vз-aN웁o'tzxy掞ip^h׌(ׅW^tfgy;#mLh1,5'"wμO:XҔSqm"gG1l2L  bXG*" .,"/@'裶YruLkeU[yi6$ O- \DHj^sdg'H(GO$ lJ2grs";^_B9u_]EVl`o4 P@4Ɔ^"p6 \'c3i5 C&7zdܙ>@`2xs B#0nLt]<>1`5ӛC&>hh$T8CIb—!($"O48Ծ }g#b2%$ {#9ǰǕ~_-NY4\ptUyHk jQ;qy0t,Ɛ+#߸d_ߝ=+^Iʮn-T+Ȟ1"=>}0 ::aJv{f1='4_ A͹g:h0^N󶹪9oT UM{w@"m?6MCq7we:Oy:/gO]ӋNMyq̤%0v>_tt&kM :5il`dkNO\Oю܍[w~ҙ|;5]uG͛ew< H} &d˞߽2n~RsYM qwL=ڠʲI{z^`٧gW{#{y4b&JElZr.hNl qTތ{ջw'?qE =ߤA8й N ?m+ >r,ufCjcϧ'/3_;4=:bQ޴+W<1 ^#ŻI.2fh]v?@IAV[?~x{VVxν sS|ngTͩ`A^w^o &1V,,w.dvZ=w"usy͛ɨ,ˏ&JMxhIrz?H~BBûݷ`~"{Objyv~pJ=( ^*?~/1ik?WCU]G`fbꙤ(OUؑ-,[mp۶ ܥ,yL>eEKgi' /l/bG{82Cd8i%z[1qۛ[ æV*;w &np=><dhV$muNEWGoώN{rF69?~|z_޽ʚ[- ޥ5L'I t+a26^ȆqztZLhA9y`vӸ Ϳ6qhBqzgoLk$&H)M}ER 0xAhMGƎJ 1Ld+7߿ {,ͬy5OI>e=5[4 ֖YkY2`c`ASG.YAa-Brض 67"A"heQ($ BCI"##HZ`42^sEwipb:-_HBځSxG&zt$!3 
CH1)Zt=+-]hN0#a֢E =:red a`ΖTnj-nW 6NuPr:7aѥ8aL8)[X Ȅ"iP+ iw$&1CGaҭ}pvG_k)~R ~tNvI y Jhq0[(r#`IqmDYymd`&9eKub9 nmc͹yl Uh!ʙC6A`ttAF-[JkxL4` P7*2SNN e,3 =5CJbĨLy){*Sf3Ms~axk,Ƃetz٣N3#Tժ{@UJrXZa[ZIЍɘ!pMB 2Z=d5ܢT%T]".D#T 6BFdL}I}?BAGcCM!qH,__ĥkPszcٮ8cDլo/]pW#6B_vk!ZE98xHpX1C`P"!R0!P@dž!xLg@a;x C<X3;rhsg8;ƕ] &,վ dmA #fܱd.O6Š7xx5.7ݦvp/(N^u^cO83lJAz$\z/'&Gܯ_&^EܒJx1`KI@UQ.Aʖ3`OM32{ - z߀j!o EcD" E`I_Ř"E28v`ᐃe^RPwZ gp{_1ᠧ_DR]@0nxN*!vkN+'Ѥ˔RQJԢצD k^&R\&66~\[!7:bKwx.Ey~dx h~$ITKnMx`CK2=9_d ^qFИ=A$Ifl6A`>ۘ0RWUf@Uї sq?E%XĠܡToe'%K;jH1 X`،xƧ<ކLVTE˥}x,{;.E1R/HȖIZUrQs[@y4 g@8pbٗOh QpMρ,Ogsxvbȱyq@@Οmgj<-ڪ@G㍮ϑkxTQI9맥cem;}tF.a %;s Q&`zB#C.!%v4 zsq:ȹնc:65ؗr'_%1I->x] s*,SsP^v:. ,jna&݊[Et߼ވNIrv͔Ntm;Αufƿ5j*bVU[묄JހC[جn.ڭbVN-8ߺ[.'kۣn1Bz cԒc;C*1\Ӽ[J%ZFr.Q uxToVTޝU'^;Bi\N5PKbjJBeTڅxk'ntDŤmJO(U1E8=LK%#V\}%_ ;ev)?M+o[&MeoMA 'm۩Q+K/0`@_<0es疁 nB @DHhR§o *c@W Q OZn]]ҁeu3Lvi< XT(iYǮ:u%oƝ/G F*SY=S~+SrS.$sbZRH=e(bNԹɅ∙O;u&m:l]ősY gPx5ꝏB,Cƿ*Z DoOPmbNBs(F"18T/SKW*RbB RMBؘf.s|Gd)˘5@ /)x0Sǒ:agXzH"ǜ1a 0[H S ұC"ʡ@oL"6 bz|L09IxcVdܳ/޵5qcKjB Uy=SٸƓ%S*4.672Pd}6E5hoVj5;壞_M27)(^#MB[ n&TrI۳S{+ ^pi5k_(a̼8`S-2 QրE*U-28:a, <5 2A@$S|;˷"(%NANŗI4\ PG _RJiXV8b*z%J1Y" oMkx".>ΨQ dQ~lXuEUWD5"_d _~ny1v)aVFl:EM(4-ѵ8Vƺ Ty,|: R)2&pbGFV i2x j gs#j<鿣Ԫ6'հԛs8#lȚD迧eq- f3W SRBz-_/ Gh "шi&je\FB*RcfFk)//),4uJ[)r3- $QH8LKM<3k2xe8afy0oZRpa0{?/iUrA}*c.Ѓ "pbC 5q7DooeynOɖצCz}4 9FmAz˧3H[_{N 1DF@6y !梇3kCD!G`OݧhZn^63%Wfvݴ2}7m7dtVЯ/o“Yqڦ-PvW1 04M*^R7ʚJY#UЗ*<7._)|sAJyDo=ܓToV\ mOkp;;'t7n0 }81Ewii­t<9;=8F)50 7ǧ~f>7u=_C9qS n¢OwnsJҔͺ(1_Sݢh:SVn6g[b N||: $,}sPo73髝JSᡚbзEXlOY/jW?fJ.=xD3{,e$#Lo_ƅp?};{VK V^_f=dbln㲝D!t}70'"Ȍ @G0j4oyё>ɛlwGwE w|u}O'4N3`2![Qq,Eި|6_N<_E9` J"9]c3kXz|؂8V" R:Zm!09esv 39\H Fb%e$(\w6nhБn!Kbc;1W MF95rAOةrzVG1y*ΣGI:\G+Z\ %~ -vcMSPtZ=X񥏗 pV!R?ԟ p>q7Lk bste~$qv(K 077:{6{S ;4j˵đ IejJY)4+p+\dHVb:|nt! Z#zK|)0ֆ@1|[In >);%rXrTU w 2HXm8$s2XDž% It!R"ŜhfFKΕF\6+ S8bvgda+aۤ~ vƚ*,!*!AEE/:n6j-wSywxcA+x7}1r\Hƴ`a n eX4J1o۽n Ib@7ig|.I_(-X,%cS&f￀Dr`PLU MN0V؆Ǒ(0L3Ub¹Y {D|82) k%J>fë'OasLҗiPƽ1VpT*k,8!@^010JYEC)LKx =0Cj++G9~طn4_ijoM N!$Y`k7Q! 
K8橀x*(UÜ  !kw#ČFF2vd#|6AV-Σ֓Q%CqN+@@.O3 駱 Ae]^\px2Xų-q0툵2&=|xjp=kajI'˧;p7`T lť vײ'ggɚH+"QA h.<1!AvT"y~(> ➏#C9,d(L^}>PVu,q8TSEa24J!bȳK$`=2"FI\;\$T-~6()!&1ռ=e.٘,4ELx,\DLW s;'/cIݐ.' Oftm"AHxf0bN/Wt֜IXTw贼p>tJ߮(ɝֱ:~z̑=’+=_3֟8ӊπr+EOr9׎:VecORp _LPH!F>PIq &ƟsQ82ZB)ڝoXϬmw~7nڢ~]+L8@ w"f ?DdI&?݂G{uԴiQӮfv{siX &v I!YkJeZ{ ZsRǢrSƒ[a^Z u=(humбՏ-3IZn"=}BԍI BZŸs9Ho4y\~ ~{N3} x\,EWTKv"ȿ^ Ӈ&EZ ONg/Oq⓵(pKk5Wf`XdNk2R CV]l") P47>ׯw{mp6 | nG75ww 1hm4RklF|rS ssXf(W>t9M3'.fI_>TaXE[1fZqa֖S%p޽Gwjβ5ަKX"^A/xG'`\d֥cp%AȼzϥD`: |ɵb$Fzt\DB> BjW ExIQHDQB{>^HGs7fI5nV~4b2m"{rƃ669`YUmV9/YcU,O(b9^0_qcJƜܑ!e+ +sE~wRI* Z+UEgf?6qV5kZlv#a&ӈ (c!X8dG !LȹJщ V):|8`UGLSt&g4˟ R*lʴqrVӗi4'DJƋIqeDͰЯ1dͧ(y|h4؞[Hƌ 2ϋW ^.EF_OĢ9 ~_;[zw/91ga W".6:xrRVԮ1IFBWyy؜~-oܕ-?~H6!uIo|<RvLT0u-[*1"U+AFL*ܣh<~fy,pbXdl|Q@Fzssp0xxFBu jPMBZP0ڝٖpS=Z· r}aX)G%-G`'XMI.$yE*wM"dDQ<9q9=Mȴ@vVhݖ;0H8+ Q:dLx4N%#t?86ɿ[`x]!R^j/d2_F|Ďa ü@ɧoKi] pN~)Yӓ389ƨ}*&;/ eњ?%Ma&ŐKc&;#.uIXeDkt'EPiAVR`@\"2 *oᴛ X΅H,&vEaBkl2IAQ$ j`z~DDJHͶ  2/r KR+6Er^7[$9S-_1 `JUtecI%?ޖB7sޕ,sU#~=6 dř۽!OeTMtQ%|'32査;]{],M.Iw .9FXe Şv>=t_-\.j.uQqCJwLJ"z(kTLD-I㬑/:?V#_s("@=Eu6P ; P\14[twlzgo&ށiZK-'67MHtZ`pJE!"9v$9+ĺz¢|$` ^yeg=QH@ZJSkQә/{ .oH 1(DyHcJatA|b8CP{>KFk;(d+ΪTɼ-u[ɒ7\ύj8v\]Zmce:8HKHtƉ 6Hf,ȼD\҉sHZNXdk'/8Ly3Aʘ'(htu{S[69wvE{x`~CxĻjnrnەsq>q*rc_-7`IנºXvc |?v(H ac=azvXڧ߫|OP&XaB waīw& o"(.QhDX~KB$Z+31~E>A?ӪHx?ӅƯQN:1r٢{(De1,fdƳuN7}J>zq*h;/o'?{Jc!4ϵ ֱ4Xc< 3lRȳvPibQ2"ȍIs@ݘ,=*-!cӱXQC|jф¶'5I@N5~|OXJ*DڝqMN(*&mԜvAN{ND&qz[Ƥ-,[ɟBDXJ4L VT?9:$B3(QmԺI|Xk0yo7pwoV'pA;[l#&5g[ciWE(1iMهC /V{w3]t}8}mѹ?~%|<>m踱x_?o1~mvxiΓƥ+v>}88`>wx/unaZu:AnXH=X]mҤFdV܏T3%>bwq~b:0Wp_4./ϗ J@a]rh9qyh(Sk_ίNӼz{ty c5>M}8}Q3i/4\6o^6`Γۣ7ec |4ؼe#&~<=y{.?n7Kv~ϟqkƭVO5 ޻(_NNO߽,i;^\B!i]8kݶO;kj3Ϗ^'C%)LJ ik? 
var/home/core/zuul-output/logs/kubelet.log
Feb 03 00:15:06 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 03 00:15:07 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 00:15:07 crc restorecon[4679]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc 
restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc 
restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 
00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc 
restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc 
restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 
crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin
to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 00:15:07 crc restorecon[4679]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:07 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc 
restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 00:15:08 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 03 00:15:08 crc kubenswrapper[4798]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.666689 4798 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675370 4798 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675402 4798 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675410 4798 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675417 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675423 4798 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675429 4798 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675435 4798 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675441 4798 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675448 4798 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675455 4798 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675461 4798 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675478 4798 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675484 4798 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675489 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675495 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675500 4798 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675506 4798 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675512 4798 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675517 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675523 4798 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675528 4798 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675533 4798 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675538 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675544 4798 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675549 4798 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675554 4798 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675559 4798 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675565 4798 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675573 4798 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675580 4798 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675586 4798 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675593 4798 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675600 4798 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675606 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675612 4798 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675618 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675624 4798 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675629 4798 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675635 4798 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675640 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675646 4798 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675677 4798 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675683 4798 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675690 4798 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675696 4798 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675701 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675706 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675712 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675718 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675723 4798 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675729 4798 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675734 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675739 4798 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675744 4798 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675749 4798 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675755 4798 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675760 4798 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675765 4798 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675771 4798 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675779 4798 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675785 4798 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675791 4798 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675796 4798 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675801 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675806 4798 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675811 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675817 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675822 4798 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675827 4798 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675832 4798 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.675837 4798 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675944 4798 flags.go:64] FLAG: --address="0.0.0.0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675956 4798 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675965 4798 flags.go:64] FLAG: --anonymous-auth="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675973 4798 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675982 4798 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675990 4798 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.675998 4798 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676006 4798 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676012 4798 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676019 4798 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676026 4798 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676033 4798 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676040 4798 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676046 4798 flags.go:64] FLAG: --cgroup-root=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676052 4798 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676058 4798 flags.go:64] FLAG: --client-ca-file=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676064 4798 flags.go:64] FLAG: --cloud-config=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676070 4798 flags.go:64] FLAG: --cloud-provider=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676076 4798 flags.go:64] FLAG: --cluster-dns="[]"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676086 4798 flags.go:64] FLAG: --cluster-domain=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676092 4798 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676105 4798 flags.go:64] FLAG: --config-dir=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676111 4798 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676117 4798 flags.go:64] FLAG: --container-log-max-files="5"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676126 4798 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676132 4798 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676139 4798 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676145 4798 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676151 4798 flags.go:64] FLAG: --contention-profiling="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676157 4798 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676163 4798 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676170 4798 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676176 4798 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676183 4798 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676189 4798 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676195 4798 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676202 4798 flags.go:64] FLAG: --enable-load-reader="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676208 4798 flags.go:64] FLAG: --enable-server="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676214 4798 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676221 4798 flags.go:64] FLAG: --event-burst="100"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676228 4798 flags.go:64] FLAG: --event-qps="50"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676234 4798 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676240 4798 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676246 4798 flags.go:64] FLAG: --eviction-hard=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676253 4798 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676259 4798 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676265 4798 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676272 4798 flags.go:64] FLAG: --eviction-soft=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676278 4798 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676284 4798 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676290 4798 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676296 4798 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676302 4798 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676311 4798 flags.go:64] FLAG: --fail-swap-on="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676317 4798 flags.go:64] FLAG: --feature-gates=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676324 4798 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676330 4798 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676336 4798 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676342 4798 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676348 4798 flags.go:64] FLAG: --healthz-port="10248"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676354 4798 flags.go:64] FLAG: --help="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676361 4798 flags.go:64] FLAG: --hostname-override=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676367 4798 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676373 4798 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676379 4798 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676385 4798 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676391 4798 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676397 4798 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676403 4798 flags.go:64] FLAG: --image-service-endpoint=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676409 4798 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676415 4798 flags.go:64] FLAG: --kube-api-burst="100"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676421 4798 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676428 4798 flags.go:64] FLAG: --kube-api-qps="50"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676434 4798 flags.go:64] FLAG: --kube-reserved=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676440 4798 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676446 4798 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676452 4798 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676458 4798 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676464 4798 flags.go:64] FLAG: --lock-file=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676470 4798 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676476 4798 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676482 4798 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676492 4798 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676498 4798 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676505 4798 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676513 4798 flags.go:64] FLAG: --logging-format="text"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676519 4798 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676525 4798 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676532 4798 flags.go:64] FLAG: --manifest-url=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676537 4798 flags.go:64] FLAG: --manifest-url-header=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676545 4798 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676552 4798 flags.go:64] FLAG: --max-open-files="1000000"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676559 4798 flags.go:64] FLAG: --max-pods="110"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676565 4798 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676571 4798 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676578 4798 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676616 4798 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676624 4798 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676631 4798 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676637 4798 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676670 4798 flags.go:64] FLAG: --node-status-max-images="50"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676677 4798 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676683 4798 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676689 4798 flags.go:64] FLAG: --pod-cidr=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676695 4798 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676704 4798 flags.go:64] FLAG: --pod-manifest-path=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676710 4798 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676716 4798 flags.go:64] FLAG: --pods-per-core="0"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676722 4798 flags.go:64] FLAG: --port="10250"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676728 4798 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676734 4798 flags.go:64] FLAG: --provider-id=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676740 4798 flags.go:64] FLAG: --qos-reserved=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676746 4798 flags.go:64] FLAG: --read-only-port="10255"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676752 4798 flags.go:64] FLAG: --register-node="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676758 4798 flags.go:64] FLAG: --register-schedulable="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676764 4798 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676774 4798 flags.go:64] FLAG: --registry-burst="10"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676780 4798 flags.go:64] FLAG: --registry-qps="5"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676789 4798 flags.go:64] FLAG: --reserved-cpus=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676795 4798 flags.go:64] FLAG: --reserved-memory=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676804 4798 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676811 4798 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676817 4798 flags.go:64] FLAG: --rotate-certificates="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676823 4798 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676829 4798 flags.go:64] FLAG: --runonce="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676835 4798 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676841 4798 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676847 4798 flags.go:64] FLAG: --seccomp-default="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676854 4798 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676860 4798 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676866 4798 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676872 4798 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676879 4798 flags.go:64] FLAG: --storage-driver-password="root"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676901 4798 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676908 4798 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676915 4798 flags.go:64] FLAG: --storage-driver-user="root"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676921 4798 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676928 4798 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676934 4798 flags.go:64] FLAG: --system-cgroups=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676940 4798 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676949 4798 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676955 4798 flags.go:64] FLAG: --tls-cert-file=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676962 4798 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676969 4798 flags.go:64] FLAG: --tls-min-version=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676976 4798 flags.go:64] FLAG: --tls-private-key-file=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676982 4798 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676988 4798 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.676995 4798 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677002 4798 flags.go:64] FLAG: --v="2"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677009 4798 flags.go:64] FLAG: --version="false"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677020 4798 flags.go:64] FLAG: --vmodule=""
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677027 4798 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677034 4798 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677168 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677175 4798 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677182 4798 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677188 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677194 4798 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677200 4798 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677206 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677212 4798 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677218 4798 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677225 4798 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677232 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677237 4798 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677243 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677251 4798 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677258 4798 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677265 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677270 4798 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677276 4798 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677282 4798 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677287 4798 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677293 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677298 4798 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677303 4798 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677309 4798 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677314 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677319 4798 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677325 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677331 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677336 4798 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677343 4798 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677348 4798 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677353 4798 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677358 4798 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677364 4798 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677369 4798 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677376 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677383 4798 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677389 4798 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677396 4798 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677403 4798 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677410 4798 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677417 4798 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677424 4798 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677431 4798 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677438 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677449 4798 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677454 4798 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677459 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677465 4798 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677470 4798 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677475 4798 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677480 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677486 4798 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 
00:15:08.677491 4798 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677496 4798 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677503 4798 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677510 4798 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677516 4798 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677521 4798 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677527 4798 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677532 4798 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677541 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677548 4798 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677555 4798 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677561 4798 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677567 4798 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677573 4798 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677579 4798 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677584 4798 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677589 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.677595 4798 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.677614 4798 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.691012 4798 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.691072 4798 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691156 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691166 4798 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691172 4798 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691177 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691182 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691186 4798 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691190 4798 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691194 4798 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691198 4798 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691202 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691206 4798 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691210 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691213 4798 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691219 4798 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691226 4798 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691233 4798 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691238 4798 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691244 4798 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691249 4798 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691255 4798 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691260 4798 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691265 4798 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691270 4798 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691275 4798 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691279 4798 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691283 4798 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691287 4798 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691291 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691295 4798 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691299 4798 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691327 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691332 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691336 4798 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691340 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691344 4798 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691348 4798 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691351 4798 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691356 4798 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691360 4798 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691365 4798 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691371 4798 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691377 4798 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691382 4798 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691388 4798 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691392 4798 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691397 4798 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691401 4798 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691406 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691410 4798 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691414 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691419 4798 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691424 4798 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691430 4798 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691434 4798 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691438 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691442 4798 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691445 4798 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691449 4798 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691453 4798 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691457 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691460 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691464 4798 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691468 4798 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691472 4798 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691476 4798 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691480 4798 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691484 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691488 4798 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691492 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691495 4798 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691500 4798 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.691507 4798 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691682 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691694 4798 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691699 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691703 4798 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691707 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691712 4798 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691716 4798 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691720 4798 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691724 4798 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691727 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691731 4798 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691736 4798 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691741 4798 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691747 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691751 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691755 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691760 4798 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691764 4798 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691768 4798 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691772 4798 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691776 4798 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691781 4798 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691785 4798 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691789 4798 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691794 4798 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691798 4798 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691802 4798 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691806 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691811 4798 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691815 4798 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691819 4798 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691823 4798 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691827 4798 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691831 4798 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691835 4798 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691839 4798 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691843 4798 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691855 4798 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691860 4798 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691864 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691868 4798 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691872 4798 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691877 4798 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691881 4798 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691886 4798 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691889 4798 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691893 4798 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691897 4798 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691902 4798 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691906 4798 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691910 4798 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691914 4798 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691917 4798 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691921 4798 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691925 4798 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691929 4798 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691932 4798 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691936 4798 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691940 4798 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691944 4798 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691948 4798 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691952 4798 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691955 4798 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691959 4798 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691963 4798 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691967 4798 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691971 4798 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691974 4798 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691978 4798 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691982 4798 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.691986 4798 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.691994 4798 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.692209 4798 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.696937 4798 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.697028 4798 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.698759 4798 server.go:997] "Starting client certificate rotation"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.698792 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.700862 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-18 00:51:34.419198926 +0000 UTC
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.700935 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.728291 4798 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.730847 4798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.732279 4798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.744880 4798 log.go:25] "Validated CRI v1 runtime API"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.783620 4798 log.go:25] "Validated CRI v1 image API"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.785898 4798 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.791422 4798 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-00-09-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.791465 4798 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.811829 4798 manager.go:217] Machine: {Timestamp:2026-02-03 00:15:08.809462213 +0000 UTC m=+0.575452244 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d689d10a-78fe-472b-864b-496c283a966b BootID:a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:52:11:9e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:52:11:9e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7e:03:8b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e8:20:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:58:6d:b9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e2:83:ca Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:1e:53:40:9a:db Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:04:1b:2a:8a:bb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}]
CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.812082 4798 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.812446 4798 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.813783 4798 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.813995 4798 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814035 4798 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814253 4798 topology_manager.go:138] "Creating topology manager with none policy" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814263 4798 container_manager_linux.go:303] "Creating device plugin manager" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814753 4798 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814783 4798 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.814987 4798 state_mem.go:36] "Initialized new in-memory state store" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.815479 4798 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.818766 4798 kubelet.go:418] "Attempting to sync node with API server" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.818798 4798 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.818819 4798 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.818837 4798 kubelet.go:324] "Adding apiserver pod source" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.818871 4798 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.823881 4798 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.824826 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.824893 4798 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.824941 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.824876 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.825021 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.827309 4798 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828859 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828887 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828893 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828901 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828916 4798 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/nfs" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828926 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828934 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828948 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828958 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828967 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828986 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.828995 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.837024 4798 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.837828 4798 server.go:1280] "Started kubelet" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.838976 4798 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.838978 4798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.839637 4798 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.839880 4798 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:08 crc systemd[1]: Started Kubernetes Kubelet. Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.840975 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841007 4798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841102 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:40:27.45097039 +0000 UTC Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841163 4798 server.go:460] "Adding debug handlers to kubelet server" Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.841295 4798 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841476 4798 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841502 4798 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.841588 4798 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.842163 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.848314 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.842243 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.850146 4798 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.850382 4798 factory.go:55] Registering systemd factory Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.850402 4798 factory.go:221] Registration of the systemd container factory successfully Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.858474 4798 factory.go:153] Registering CRI-O factory Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.858512 4798 factory.go:221] Registration of the crio container factory successfully Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.858550 4798 factory.go:103] Registering Raw factory Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.858574 4798 manager.go:1196] Started watching for new ooms in manager Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.859338 4798 manager.go:319] Starting recovery of all containers Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.858911 4798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: 
connect: connection refused" event="&Event{ObjectMeta:{crc.1890944c441d333a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 00:15:08.837794618 +0000 UTC m=+0.603784639,LastTimestamp:2026-02-03 00:15:08.837794618 +0000 UTC m=+0.603784639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864713 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864782 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864803 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864821 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864839 4798 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864854 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864869 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864883 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864901 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864916 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864930 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864946 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864962 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864979 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.864996 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865014 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865029 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865043 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865058 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865076 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865094 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865115 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865134 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" 
seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865152 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865167 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865183 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865200 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865219 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865235 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865271 4798 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865289 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865306 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865321 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865366 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865381 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865394 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865411 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865425 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865441 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865455 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865470 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865484 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865499 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865516 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865532 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865566 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865581 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865597 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" 
seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865612 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865627 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865665 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865681 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865703 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865763 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865784 4798 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865799 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865843 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865865 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865881 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865897 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865916 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865937 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865957 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865974 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.865990 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866005 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866020 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866037 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866052 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866070 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866090 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866108 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866129 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 
crc kubenswrapper[4798]: I0203 00:15:08.866146 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866160 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866175 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866196 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866215 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866230 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866253 4798 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866268 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866283 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866298 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866350 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866373 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866391 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866406 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866421 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866437 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866452 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866467 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866486 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866503 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866521 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866541 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866558 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866573 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866594 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866610 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866625 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866640 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866676 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866691 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866707 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866729 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866745 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866762 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866778 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866798 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866814 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" 
seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866829 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866846 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866862 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866880 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866898 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866912 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.866927 4798 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867008 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867022 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867036 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867051 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867065 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867079 4798 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867093 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.867111 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870670 4798 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870743 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870767 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 
00:15:08.870783 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870795 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870837 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870850 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870863 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870875 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870890 4798 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870904 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870919 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870932 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870945 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870957 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870970 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870984 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.870996 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871012 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871025 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871042 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871056 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871072 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871084 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871097 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871110 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871123 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871136 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871148 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871161 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871188 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871203 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871217 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871230 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871245 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871258 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871272 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871287 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871301 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871314 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871328 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871342 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871356 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871373 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871388 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871402 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871417 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871429 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871442 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871454 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871468 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871481 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" 
seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871501 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871514 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871527 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871539 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871554 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871567 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 
00:15:08.871586 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871600 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871615 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871630 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871645 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871672 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871687 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871701 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871717 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871731 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871744 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871759 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871775 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871789 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871804 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871817 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871831 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871845 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871859 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871873 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871887 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871901 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871915 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871929 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871945 4798 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871956 4798 reconstruct.go:97] "Volume reconstruction finished" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.871964 4798 reconciler.go:26] "Reconciler: start to sync state" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.882290 4798 manager.go:324] Recovery completed Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.892895 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.894374 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.894424 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.894438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.895604 4798 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.895623 4798 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.895643 4798 state_mem.go:36] "Initialized new in-memory state store" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.904838 4798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.906832 4798 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.906887 4798 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.906929 4798 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.906992 4798 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 00:15:08 crc kubenswrapper[4798]: W0203 00:15:08.907619 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.907725 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:08 crc kubenswrapper[4798]: E0203 00:15:08.942553 4798 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.947101 4798 policy_none.go:49] "None policy: Start" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.948324 4798 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 00:15:08 crc kubenswrapper[4798]: I0203 00:15:08.948354 4798 state_mem.go:35] "Initializing new in-memory state store" Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.007270 4798 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 
00:15:09.021918 4798 manager.go:334] "Starting Device Plugin manager" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.022001 4798 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.022028 4798 server.go:79] "Starting device plugin registration server" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.022627 4798 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.022682 4798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.023287 4798 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.023437 4798 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.023450 4798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.030161 4798 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.049344 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.123051 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.125161 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.125246 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.125266 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.125310 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.126118 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.207510 4798 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.207708 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.209457 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.209530 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.209548 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.209846 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc 
kubenswrapper[4798]: I0203 00:15:09.210968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211031 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211051 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211129 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211239 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211403 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.211452 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212339 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212349 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212810 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212846 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.212940 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213304 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213342 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213450 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213541 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213553 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213853 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213885 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.213896 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214026 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214198 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214245 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214383 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214430 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214448 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214764 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214805 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214957 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.214984 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215224 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215267 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215725 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215756 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.215765 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276144 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276203 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276235 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276261 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276284 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276304 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276323 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276347 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276370 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276398 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276421 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276444 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276466 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276488 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.276558 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.326540 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.328098 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.328166 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.328185 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.328226 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.328970 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378118 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378219 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378276 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378308 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378345 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378377 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378408 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378440 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378472 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378514 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378554 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378592 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378697 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378746 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378614 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378798 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378749 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378755 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378563 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378695 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378888 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378624 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378610 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.378938 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379060 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379108 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379126 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379147 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379229 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.379305 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.451139 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.562065 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.594492 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.611300 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: W0203 00:15:09.611954 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0bbd9a5ad10c4af82e751b049f3866e956a932317f2f4ecc78e46aad5106a624 WatchSource:0}: Error finding container 0bbd9a5ad10c4af82e751b049f3866e956a932317f2f4ecc78e46aad5106a624: Status 404 returned error can't find the container with id 0bbd9a5ad10c4af82e751b049f3866e956a932317f2f4ecc78e46aad5106a624
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.632994 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: W0203 00:15:09.641863 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7b7221b100456391b5b3eb6a2abaa2999a9aae8394ab18a98afe04c7e0da69f7 WatchSource:0}: Error finding container 7b7221b100456391b5b3eb6a2abaa2999a9aae8394ab18a98afe04c7e0da69f7: Status 404 returned error can't find the container with id 7b7221b100456391b5b3eb6a2abaa2999a9aae8394ab18a98afe04c7e0da69f7
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.645214 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: W0203 00:15:09.658484 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ff8ff76fba09a31fb20fe27a563bab80ce8fe47a64bff6d43f322815fa9d8ebd WatchSource:0}: Error finding container ff8ff76fba09a31fb20fe27a563bab80ce8fe47a64bff6d43f322815fa9d8ebd: Status 404 returned error can't find the container with id ff8ff76fba09a31fb20fe27a563bab80ce8fe47a64bff6d43f322815fa9d8ebd
Feb 03 00:15:09 crc kubenswrapper[4798]: W0203 00:15:09.668593 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-03da5d4eb3992c836008da4b961f1937b16bde2961b1fc715b8e18cdd08add78 WatchSource:0}: Error finding container 03da5d4eb3992c836008da4b961f1937b16bde2961b1fc715b8e18cdd08add78: Status 404 returned error can't find the container with id 03da5d4eb3992c836008da4b961f1937b16bde2961b1fc715b8e18cdd08add78
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.729540 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.731306 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.731369 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.731383 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.731422 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.732142 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Feb 03 00:15:09 crc kubenswrapper[4798]: W0203 00:15:09.824219 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:09 crc kubenswrapper[4798]: E0203 00:15:09.824852 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.840828 4798 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.841828 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:55:30.432217574 +0000 UTC
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.911691 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"03da5d4eb3992c836008da4b961f1937b16bde2961b1fc715b8e18cdd08add78"}
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.913115 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff8ff76fba09a31fb20fe27a563bab80ce8fe47a64bff6d43f322815fa9d8ebd"}
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.914404 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b118f904c03bb0b5001e29eded66da88c2f8003c899b3a5b525988551b4d8beb"}
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.916330 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b7221b100456391b5b3eb6a2abaa2999a9aae8394ab18a98afe04c7e0da69f7"}
Feb 03 00:15:09 crc kubenswrapper[4798]: I0203 00:15:09.919438 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0bbd9a5ad10c4af82e751b049f3866e956a932317f2f4ecc78e46aad5106a624"}
Feb 03 00:15:10 crc kubenswrapper[4798]: W0203 00:15:10.001564 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.001691 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.251797 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s"
Feb 03 00:15:10 crc kubenswrapper[4798]: W0203 00:15:10.257775 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.257875 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:10 crc kubenswrapper[4798]: W0203 00:15:10.361898 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.362000 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.532808 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.534667 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.534710 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.534719 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.534743 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.535193 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.755975 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 00:15:10 crc kubenswrapper[4798]: E0203 00:15:10.757601 4798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.841766 4798 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.841974 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:08:34.862926267 +0000 UTC
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.924955 4798 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624" exitCode=0
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.925076 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.925157 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.926808 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.926849 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.926860 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.928341 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.928393 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.928413 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.930312 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598" exitCode=0
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.930366 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.931274 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.933157 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.933205 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.933223 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.935572 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.935935 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.935982 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.936818 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.936852 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.936869 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.935720 4798 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7" exitCode=0
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.937279 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.937343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.937357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.948436 4798 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b9a2a8d66bd6782505d78d8665d6bd9df92fe62ba8180e2ffa170b5eea6decf9" exitCode=0
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.948508 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b9a2a8d66bd6782505d78d8665d6bd9df92fe62ba8180e2ffa170b5eea6decf9"}
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.948555 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.949755 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.949811 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:10 crc kubenswrapper[4798]: I0203 00:15:10.949826 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:11 crc kubenswrapper[4798]: E0203 00:15:11.500562 4798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890944c441d333a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 00:15:08.837794618 +0000 UTC m=+0.603784639,LastTimestamp:2026-02-03 00:15:08.837794618 +0000 UTC m=+0.603784639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.840682 4798 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.842722 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24
05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:30:13.219875128 +0000 UTC Feb 03 00:15:11 crc kubenswrapper[4798]: E0203 00:15:11.853692 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 03 00:15:11 crc kubenswrapper[4798]: W0203 00:15:11.899570 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:11 crc kubenswrapper[4798]: E0203 00:15:11.899703 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:11 crc kubenswrapper[4798]: W0203 00:15:11.952454 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 03 00:15:11 crc kubenswrapper[4798]: E0203 00:15:11.952551 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.954724 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a5bbd4cca7111d793a1e728396ad4e574b2e479a546e33aa2ff8b097d8f4056d"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.954800 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.955586 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.955623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.955637 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957129 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957455 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957484 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957499 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957909 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957940 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.957950 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.960259 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.960386 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.962045 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.962074 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.962088 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.966267 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5"} Feb 
03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.966305 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.966321 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.966332 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.969313 4798 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488" exitCode=0 Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.969401 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.969406 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488"} Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.970289 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.970357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 03 00:15:11 crc kubenswrapper[4798]: I0203 00:15:11.970382 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: W0203 00:15:12.035515 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 03 00:15:12 crc kubenswrapper[4798]: E0203 00:15:12.035590 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.136186 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.137211 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.137257 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.137268 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.137293 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 00:15:12 crc kubenswrapper[4798]: E0203 00:15:12.137826 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.843160 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:18:24.826575636 +0000 UTC
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.979566 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185"}
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.979757 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.981203 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.981237 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.981249 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.983130 4798 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c" exitCode=0
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.983312 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.983540 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c"}
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.983602 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.983728 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984041 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984489 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984901 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984939 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984953 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.984973 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.985013 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.985036 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.985889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.986045 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.986079 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.986091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.986285 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:12 crc kubenswrapper[4798]: I0203 00:15:12.986420 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.327613 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.843728 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:01:33.194852139 +0000 UTC
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.990960 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3"}
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.991044 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.991069 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7"}
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.991088 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a"}
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.991137 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.991218 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.992519 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.992560 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.992575 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.993176 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.993224 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:13 crc kubenswrapper[4798]: I0203 00:15:13.993257 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.844124 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:45:17.624772245 +0000 UTC
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.962350 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.997064 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e"}
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.997107 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.997152 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.997177 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:14 crc kubenswrapper[4798]: I0203 00:15:14.997109 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38"}
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002010 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002060 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002076 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002097 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.002133 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.325709 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.338326 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.339844 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.339882 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.339895 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.339920 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 00:15:15 crc kubenswrapper[4798]: I0203 00:15:15.845302 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:32:08.208243185 +0000 UTC
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.000897 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.000979 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.001086 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003069 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003118 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003076 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.003213 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.846538 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:48:31.335143118 +0000 UTC
Feb 03 00:15:16 crc kubenswrapper[4798]: I0203 00:15:16.897234 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.006165 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.007623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.007788 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.007849 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.709564 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.709808 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.711219 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.711278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.711287 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.773517 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.773844 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.775417 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.775471 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.775491 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:17 crc kubenswrapper[4798]: I0203 00:15:17.847092 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:00:54.030507738 +0000 UTC
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.372451 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.372649 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.374010 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.374088 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.374143 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:18 crc kubenswrapper[4798]: I0203 00:15:18.847947 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:03:21.703828428 +0000 UTC
Feb 03 00:15:19 crc kubenswrapper[4798]: E0203 00:15:19.030390 4798 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.194080 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.194830 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.196393 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.196452 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.196467 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.199871 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.697045 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.697302 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.698586 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.698689 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.698725 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:19 crc kubenswrapper[4798]: I0203 00:15:19.849124 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:00:51.369802447 +0000 UTC
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.015556 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.015903 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.016926 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.016993 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.017013 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.021297 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:15:20 crc kubenswrapper[4798]: I0203 00:15:20.850113 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:37:54.13319251 +0000 UTC
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.017986 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.019270 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.019324 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.019343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.372871 4798 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.372959 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 00:15:21 crc kubenswrapper[4798]: I0203 00:15:21.850930 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:20:13.226320283 +0000 UTC
Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.020562 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.022078 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.022148 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.022169 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:22 crc kubenswrapper[4798]: W0203 00:15:22.747397 4798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.747509 4798 trace.go:236] Trace[280598454]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 00:15:12.745) (total time: 10001ms):
Feb 03 00:15:22 crc kubenswrapper[4798]: Trace[280598454]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS
handshake timeout 10001ms (00:15:22.747) Feb 03 00:15:22 crc kubenswrapper[4798]: Trace[280598454]: [10.001631618s] [10.001631618s] END Feb 03 00:15:22 crc kubenswrapper[4798]: E0203 00:15:22.747540 4798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.844145 4798 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.851241 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:39:32.343562408 +0000 UTC Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.971062 4798 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.971147 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.982889 4798 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: 
Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 00:15:22 crc kubenswrapper[4798]: I0203 00:15:22.982961 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 00:15:23 crc kubenswrapper[4798]: I0203 00:15:23.851927 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:16:57.726473418 +0000 UTC Feb 03 00:15:24 crc kubenswrapper[4798]: I0203 00:15:24.853091 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:20:47.086198799 +0000 UTC Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.334370 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.334586 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.336275 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.336350 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.336372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 
00:15:25.339686 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.853639 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:26:57.411939298 +0000 UTC Feb 03 00:15:25 crc kubenswrapper[4798]: I0203 00:15:25.959023 4798 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.830340 4798 apiserver.go:52] "Watching apiserver" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.838257 4798 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.838605 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.839013 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.839093 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.839147 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:26 crc kubenswrapper[4798]: E0203 00:15:26.839252 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.839284 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:26 crc kubenswrapper[4798]: E0203 00:15:26.839344 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.839699 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.840091 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:26 crc kubenswrapper[4798]: E0203 00:15:26.840145 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.842448 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.842834 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.843164 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.843185 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.843281 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.843517 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.844026 4798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.845196 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.845232 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.845251 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 
03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.853876 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:42:59.228382554 +0000 UTC Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.883055 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.899979 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.912938 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.923171 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.935886 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.950406 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.964008 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b36
0c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 03 00:15:26 crc kubenswrapper[4798]: I0203 00:15:26.980334 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.032333 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.738731 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.754400 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.757106 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.758399 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.771922 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.782951 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.793243 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.805613 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.815790 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.825996 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.834023 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.842011 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.852042 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.854295 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:18:15.615521387 +0000 UTC Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.882943 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.892765 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.907540 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.907616 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:27 crc kubenswrapper[4798]: E0203 00:15:27.908123 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:27 crc kubenswrapper[4798]: E0203 00:15:27.907996 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.907603 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b36
0c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.921232 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.938952 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:27 crc kubenswrapper[4798]: E0203 00:15:27.971785 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.973464 4798 trace.go:236] Trace[499883835]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 00:15:17.789) (total time: 10183ms): Feb 03 00:15:27 crc kubenswrapper[4798]: Trace[499883835]: ---"Objects listed" error: 10183ms (00:15:27.973) Feb 03 00:15:27 crc kubenswrapper[4798]: Trace[499883835]: [10.183804554s] [10.183804554s] END Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.973485 4798 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.974467 4798 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.975468 4798 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.976098 4798 trace.go:236] Trace[1142843814]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 00:15:15.560) (total time: 12415ms): Feb 03 00:15:27 crc kubenswrapper[4798]: 
Trace[1142843814]: ---"Objects listed" error: 12415ms (00:15:27.975) Feb 03 00:15:27 crc kubenswrapper[4798]: Trace[1142843814]: [12.415248965s] [12.415248965s] END Feb 03 00:15:27 crc kubenswrapper[4798]: I0203 00:15:27.976126 4798 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:27 crc kubenswrapper[4798]: E0203 00:15:27.976629 4798 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.004329 4798 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.021033 4798 csr.go:261] certificate signing request csr-h59pz is approved, waiting to be issued Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.024388 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.031318 4798 csr.go:257] certificate signing request csr-h59pz is issued Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076308 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076345 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 00:15:28 
crc kubenswrapper[4798]: I0203 00:15:28.076369 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076386 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076403 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076427 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076443 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076458 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076478 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076493 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076511 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076533 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076547 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 00:15:28 
crc kubenswrapper[4798]: I0203 00:15:28.076561 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076575 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076594 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076616 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076634 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076662 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076692 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076708 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076726 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076741 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076758 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076774 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076790 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076806 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076829 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076858 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076875 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076900 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076914 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076930 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076946 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.076962 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 
00:15:28.076988 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077008 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077026 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077046 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077065 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077084 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077102 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077118 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077133 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077148 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077163 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 
00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077182 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077197 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077211 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077225 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077239 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077290 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077314 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077334 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077355 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077371 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077385 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 00:15:28 crc kubenswrapper[4798]: 
I0203 00:15:28.077400 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077416 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077431 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077446 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077461 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077477 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077492 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077508 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077527 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077547 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077570 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077593 4798 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077609 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077624 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077637 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077669 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077687 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077701 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077716 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077731 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077746 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077761 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077776 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" 
(UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077791 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077807 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077805 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077824 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077839 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077856 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077870 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077887 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077909 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077930 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077946 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077962 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077978 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077993 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078010 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078026 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078040 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078055 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078071 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078086 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078101 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078115 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078130 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078146 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078161 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078243 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078260 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078276 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078294 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078311 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078327 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078344 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078365 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078387 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078410 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078431 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078452 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078475 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078492 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078508 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078523 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078538 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078554 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078570 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078585 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078600 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078615 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078631 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078646 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078685 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078702 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078717 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078732 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078747 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078761 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078777 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078792 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078806 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078824 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078839 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078857 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078873 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078889 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078905 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078921 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078937 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078953 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078969 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078985 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079000 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079017 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079038 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079054 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079070 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079086 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079101 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079119 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079134 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079150 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079166 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079182 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079197 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079213 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079230 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079245 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079261 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079277 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079318 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079338 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079359 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079379 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079399 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079420 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079440 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079459 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079481 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079501 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079518 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079534 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079551 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079567 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079584 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079600 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079617 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079637 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079692 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079712 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079733 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079750 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079766 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079782 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079799 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079818 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 03 00:15:28 crc kubenswrapper[4798]: 
I0203 00:15:28.079834 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079850 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079866 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.082706 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.082748 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.082771 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.082889 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083084 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083117 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083341 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 
00:15:28.083707 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.077332 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078006 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078171 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078348 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078496 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.078647 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079562 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.079751 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.082596 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083004 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083118 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083156 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.083424 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.084402 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.086705 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.086860 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.084279 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.087237 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.090352 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.090700 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.090929 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093314 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093389 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093424 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093456 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093509 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093516 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093607 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093836 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093844 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.093900 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094066 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094088 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094119 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094140 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094288 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094303 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094324 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094389 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094503 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094677 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094691 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094809 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.094791 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.095280 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.086863 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.095543 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.095597 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.095700 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096164 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096376 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096586 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096696 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096700 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096860 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096877 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.096996 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097104 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097125 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097247 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097409 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097480 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097527 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097601 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097704 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097746 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.097991 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.098108 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.098672 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.099491 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.099542 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.099705 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.100225 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.100834 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.100907 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.101163 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.101394 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.101662 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.104363 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.104766 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.104853 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105023 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105031 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105441 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105556 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105786 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105836 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105874 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.105909 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.105915 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106052 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106196 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106202 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.106320 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106538 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106508 4798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106792 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106903 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.106918 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107159 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107163 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107357 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107671 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107684 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.107778 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.108211 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.108316 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:28.60826442 +0000 UTC m=+20.374254431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.108372 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:28.608365032 +0000 UTC m=+20.374355043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.108459 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.108593 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.108976 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109181 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.109688 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:28.609677623 +0000 UTC m=+20.375667634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109712 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109731 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109783 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109812 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109831 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109827 4798 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109864 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109881 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109897 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109928 4798 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109943 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109956 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" 
(UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.109971 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110038 4798 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110055 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110069 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110083 4798 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110087 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110094 4798 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110126 4798 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110138 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110148 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110157 4798 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110310 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110424 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110518 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110587 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110836 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110959 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110979 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.110973 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.111029 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.111097 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.111521 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.111877 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112032 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112068 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112095 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112270 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112372 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112527 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.112708 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113017 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113044 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113131 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113362 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113552 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113769 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113828 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.113985 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114274 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114320 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114590 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114605 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114672 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114866 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.114903 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.115161 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.115479 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.115727 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.115880 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.116126 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.116093 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.116278 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.116351 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.116640 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.117350 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.117564 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.118046 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.119023 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.119118 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.119190 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.121318 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.121359 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.121795 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.123109 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.123240 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.123678 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124043 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124134 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124264 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124275 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124432 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.124846 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.125055 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.125403 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.125510 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.125718 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.127794 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.128672 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.128737 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.128762 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.128829 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.128869 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:28.628834902 +0000 UTC m=+20.394825113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.129737 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.129877 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.130038 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.130522 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.131108 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.131948 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.131986 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.132000 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.132050 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.132061 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:28.632042137 +0000 UTC m=+20.398032148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.132155 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.132265 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.132245 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.132460 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.133435 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.133692 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.134178 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.134511 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.134595 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.134731 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.134996 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.135127 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.135480 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.136052 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.138522 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.139988 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.140173 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.140840 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.141896 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.144165 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.146699 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.146715 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.151428 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.154678 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.159786 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.165819 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.168298 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.172243 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.178432 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.183693 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.200506 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212508 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212569 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212641 4798 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212676 4798 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212692 4798 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212705 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212717 4798 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212720 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212729 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212766 4798 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212777 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212786 4798 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212794 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212803 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212811 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212820 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212828 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212837 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212844 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212853 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212862 4798 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212777 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212869 4798 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212953 4798 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212969 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212983 4798 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.212996 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213008 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213026 4798 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213037 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213050 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 
03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213063 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213074 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213087 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213099 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213128 4798 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213140 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213153 4798 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc 
kubenswrapper[4798]: I0203 00:15:28.213167 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213179 4798 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213191 4798 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213204 4798 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213216 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213227 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213239 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213250 4798 reconciler_common.go:293] "Volume 
detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213261 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213272 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213283 4798 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213296 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213307 4798 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213319 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213334 4798 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213347 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213360 4798 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213383 4798 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213394 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213406 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213417 4798 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213427 4798 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: 
I0203 00:15:28.213438 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213448 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213459 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213470 4798 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213484 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213495 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213505 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213516 4798 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213526 4798 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213536 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213548 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213558 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213569 4798 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213579 4798 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213591 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213602 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213612 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213624 4798 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213634 4798 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213645 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213672 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213683 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213695 4798 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213708 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213720 4798 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213732 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213746 4798 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213757 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213769 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213780 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213793 4798 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213805 4798 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213818 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213839 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213850 4798 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213862 4798 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213873 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213884 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213895 4798 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213905 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213916 4798 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213928 4798 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213959 4798 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: 
I0203 00:15:28.213970 4798 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213981 4798 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.213992 4798 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214003 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214014 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214025 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214038 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214049 4798 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214060 4798 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214072 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214084 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214095 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214108 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214121 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214131 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 
crc kubenswrapper[4798]: I0203 00:15:28.214143 4798 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214156 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214168 4798 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214179 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214191 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214203 4798 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214214 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214234 4798 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214246 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214258 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214269 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214281 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214293 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214305 4798 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214316 4798 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214327 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214338 4798 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214349 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214361 4798 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214371 4798 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214382 4798 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214393 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" 
Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214403 4798 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214414 4798 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214425 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214435 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214447 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214458 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214470 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214484 4798 reconciler_common.go:293] "Volume 
detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214495 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214507 4798 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214518 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214529 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214540 4798 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214550 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214561 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214573 4798 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214584 4798 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214595 4798 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214607 4798 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214618 4798 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214630 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214641 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214668 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214680 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214691 4798 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214701 4798 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214714 4798 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214725 4798 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214737 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc 
kubenswrapper[4798]: I0203 00:15:28.214748 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214760 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214770 4798 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214782 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214793 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214804 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214816 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214828 4798 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.214839 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.218023 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.356115 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.366950 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.369934 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3d26786b3d3dc11b2f164ceca08e6b934e5c8d4ae28f7dc3678c5a53ae4599bd WatchSource:0}: Error finding container 3d26786b3d3dc11b2f164ceca08e6b934e5c8d4ae28f7dc3678c5a53ae4599bd: Status 404 returned error can't find the container with id 3d26786b3d3dc11b2f164ceca08e6b934e5c8d4ae28f7dc3678c5a53ae4599bd Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.374313 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.376018 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-38a97ac637d8072863538dae443154359d19fc2f9c1c26805ae3e002a1aa0eaa WatchSource:0}: Error finding container 38a97ac637d8072863538dae443154359d19fc2f9c1c26805ae3e002a1aa0eaa: Status 404 returned error can't find the container with id 38a97ac637d8072863538dae443154359d19fc2f9c1c26805ae3e002a1aa0eaa Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.390404 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a0b3affad9afdd8c248de936cc72a3858c5ca6e87b4729fcdfb93605821ad4fe WatchSource:0}: Error finding container a0b3affad9afdd8c248de936cc72a3858c5ca6e87b4729fcdfb93605821ad4fe: Status 404 returned error can't find the container with id a0b3affad9afdd8c248de936cc72a3858c5ca6e87b4729fcdfb93605821ad4fe Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.618688 4798 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.618965 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:29.61894327 +0000 UTC m=+21.384933291 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.619159 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.619202 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.619254 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.619278 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.619290 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:29.619279358 +0000 UTC m=+21.385269369 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.619336 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:29.619324339 +0000 UTC m=+21.385314350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.682754 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.686624 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.692862 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.696264 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.698365 4798 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.698598 4798 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.698706 4798 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended 
with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 03 00:15:28 crc kubenswrapper[4798]: W0203 00:15:28.698831 4798 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.699467 4798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.150:56258->38.102.83.150:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1890944c75cb0da8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 00:15:09.671271848 +0000 UTC m=+1.437261909,LastTimestamp:2026-02-03 00:15:09.671271848 +0000 UTC m=+1.437261909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.719684 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.719772 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.719904 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.719935 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.719949 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.720006 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:29.719988407 +0000 UTC m=+21.485978418 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.719904 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.720085 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.720114 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.720202 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:29.720176751 +0000 UTC m=+21.486166792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.855798 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:11:36.339613365 +0000 UTC Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.907540 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:28 crc kubenswrapper[4798]: E0203 00:15:28.907748 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.912119 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.912834 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.914504 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.915325 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.916608 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.917373 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.918156 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.919381 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.920201 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.921437 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.922161 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.923596 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.924246 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.924943 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.926282 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.926994 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.928980 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.929980 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.930940 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.932843 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.934089 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.935320 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.936229 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.937619 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.938470 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.939855 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.941202 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.942298 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.944356 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.945156 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.945846 4798 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.945989 4798 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.948932 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.949531 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.950586 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.952505 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.953327 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.954426 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.955202 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.956424 4798 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.957001 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.957775 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.958984 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.960129 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.960817 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.961934 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.962585 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.963900 4798 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.965017 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.966057 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.966598 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.967215 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.968306 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 00:15:28 crc kubenswrapper[4798]: I0203 00:15:28.968922 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.032391 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 00:10:28 +0000 UTC, rotation deadline is 2026-12-03 16:58:24.244522546 +0000 UTC Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 
00:15:29.032459 4798 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7288h42m55.212066651s for next certificate rotation Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.038326 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.038390 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.038408 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a0b3affad9afdd8c248de936cc72a3858c5ca6e87b4729fcdfb93605821ad4fe"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.039514 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"38a97ac637d8072863538dae443154359d19fc2f9c1c26805ae3e002a1aa0eaa"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.041117 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.041195 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3d26786b3d3dc11b2f164ceca08e6b934e5c8d4ae28f7dc3678c5a53ae4599bd"} Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.044535 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t8mqs"] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.044792 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.050587 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.051048 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4nx5v"] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.051406 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.054498 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.054693 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.055223 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.055226 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.055604 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.058465 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.122976 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a80722df-c977-49e0-b1ec-a83fea1c4f0b-serviceca\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.123036 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f968882a-8e40-4760-b1fe-2d456390d30c-hosts-file\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.123063 4798 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvtd\" (UniqueName: \"kubernetes.io/projected/f968882a-8e40-4760-b1fe-2d456390d30c-kube-api-access-swvtd\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.123210 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a80722df-c977-49e0-b1ec-a83fea1c4f0b-host\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.123293 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwh9b\" (UniqueName: \"kubernetes.io/projected/a80722df-c977-49e0-b1ec-a83fea1c4f0b-kube-api-access-qwh9b\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.223978 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f968882a-8e40-4760-b1fe-2d456390d30c-hosts-file\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224149 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f968882a-8e40-4760-b1fe-2d456390d30c-hosts-file\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224223 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-swvtd\" (UniqueName: \"kubernetes.io/projected/f968882a-8e40-4760-b1fe-2d456390d30c-kube-api-access-swvtd\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224246 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a80722df-c977-49e0-b1ec-a83fea1c4f0b-host\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224268 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh9b\" (UniqueName: \"kubernetes.io/projected/a80722df-c977-49e0-b1ec-a83fea1c4f0b-kube-api-access-qwh9b\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224301 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a80722df-c977-49e0-b1ec-a83fea1c4f0b-serviceca\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.224533 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a80722df-c977-49e0-b1ec-a83fea1c4f0b-host\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.225239 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a80722df-c977-49e0-b1ec-a83fea1c4f0b-serviceca\") pod \"node-ca-t8mqs\" (UID: 
\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.247408 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swvtd\" (UniqueName: \"kubernetes.io/projected/f968882a-8e40-4760-b1fe-2d456390d30c-kube-api-access-swvtd\") pod \"node-resolver-4nx5v\" (UID: \"f968882a-8e40-4760-b1fe-2d456390d30c\") " pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.253078 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh9b\" (UniqueName: \"kubernetes.io/projected/a80722df-c977-49e0-b1ec-a83fea1c4f0b-kube-api-access-qwh9b\") pod \"node-ca-t8mqs\" (UID: \"a80722df-c977-49e0-b1ec-a83fea1c4f0b\") " pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.363454 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t8mqs" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.372772 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4nx5v" Feb 03 00:15:29 crc kubenswrapper[4798]: W0203 00:15:29.395527 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf968882a_8e40_4760_b1fe_2d456390d30c.slice/crio-d90f8f65cd50321735b357f5cab56227923e1d36e80ffb869b339a0362d1f2d9 WatchSource:0}: Error finding container d90f8f65cd50321735b357f5cab56227923e1d36e80ffb869b339a0362d1f2d9: Status 404 returned error can't find the container with id d90f8f65cd50321735b357f5cab56227923e1d36e80ffb869b339a0362d1f2d9 Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.460618 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-b842j"] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.461004 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.470549 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.470793 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.470935 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.471008 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.471404 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 
00:15:29.531085 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6602c86-f236-4772-b70f-a8b4847b95dd-proxy-tls\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.531133 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c6602c86-f236-4772-b70f-a8b4847b95dd-rootfs\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.531168 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcdv\" (UniqueName: \"kubernetes.io/projected/c6602c86-f236-4772-b70f-a8b4847b95dd-kube-api-access-lrcdv\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.531210 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6602c86-f236-4772-b70f-a8b4847b95dd-mcd-auth-proxy-config\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631813 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631896 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c6602c86-f236-4772-b70f-a8b4847b95dd-rootfs\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631925 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631944 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631962 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrcdv\" (UniqueName: \"kubernetes.io/projected/c6602c86-f236-4772-b70f-a8b4847b95dd-kube-api-access-lrcdv\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.631986 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c6602c86-f236-4772-b70f-a8b4847b95dd-mcd-auth-proxy-config\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.632008 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6602c86-f236-4772-b70f-a8b4847b95dd-proxy-tls\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.632005 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c6602c86-f236-4772-b70f-a8b4847b95dd-rootfs\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.632061 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.632059 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.632128 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:31.632077118 +0000 UTC m=+23.398067129 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.632180 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:31.63216857 +0000 UTC m=+23.398158581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.632200 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:31.632190281 +0000 UTC m=+23.398180292 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.632771 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6602c86-f236-4772-b70f-a8b4847b95dd-mcd-auth-proxy-config\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.637199 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6602c86-f236-4772-b70f-a8b4847b95dd-proxy-tls\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.651127 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrcdv\" (UniqueName: \"kubernetes.io/projected/c6602c86-f236-4772-b70f-a8b4847b95dd-kube-api-access-lrcdv\") pod \"machine-config-daemon-b842j\" (UID: \"c6602c86-f236-4772-b70f-a8b4847b95dd\") " pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.714882 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.732913 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.732976 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733083 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733099 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733109 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.733120 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733152 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:31.733138775 +0000 UTC m=+23.499128786 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733197 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733249 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733272 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.733301 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:31.733292929 +0000 UTC m=+23.499282940 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.764706 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.780988 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.781339 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: W0203 00:15:29.794398 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6602c86_f236_4772_b70f_a8b4847b95dd.slice/crio-a46522a93526ac525a2f27370be2ef8cc8c8877d1c4840c91b5d00b96e70ff8d WatchSource:0}: Error finding container a46522a93526ac525a2f27370be2ef8cc8c8877d1c4840c91b5d00b96e70ff8d: Status 404 returned error can't find the container with id a46522a93526ac525a2f27370be2ef8cc8c8877d1c4840c91b5d00b96e70ff8d Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.808206 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.822910 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.839487 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nhpkc"] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.840294 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.842585 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.842596 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.842686 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.845110 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.845309 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.845861 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ktf4c"] Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.846379 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.848682 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.849857 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.851383 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.856055 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:10:19.793756833 +0000 UTC Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.875141 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.887828 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.904222 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.907417 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.907597 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.907668 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:29 crc kubenswrapper[4798]: E0203 00:15:29.907810 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.925109 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935076 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-multus-certs\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935140 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-daemon-config\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935181 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935265 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-bin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935353 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-hostroot\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935420 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-etc-kubernetes\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935470 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-k8s-cni-cncf-io\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935564 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-multus\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935615 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-conf-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935671 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935706 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-os-release\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935734 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935760 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-kubelet\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935804 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-cnibin\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935828 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cni-binary-copy\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935845 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-netns\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935881 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935904 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cnibin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935930 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-system-cni-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935957 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-socket-dir-parent\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.935998 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58sd\" (UniqueName: \"kubernetes.io/projected/60953d56-8dc2-4adf-96bc-078a558476e1-kube-api-access-d58sd\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.936022 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dpn\" (UniqueName: \"kubernetes.io/projected/106da5aa-5f2e-4d32-b172-4844ad6de7f6-kube-api-access-k4dpn\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.936044 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-system-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.936088 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-os-release\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.941563 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.956534 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.968316 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.983462 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:29 crc kubenswrapper[4798]: I0203 00:15:29.997288 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.008112 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.020563 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037156 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-system-cni-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037210 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-socket-dir-parent\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037239 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d58sd\" (UniqueName: \"kubernetes.io/projected/60953d56-8dc2-4adf-96bc-078a558476e1-kube-api-access-d58sd\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037270 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4dpn\" (UniqueName: \"kubernetes.io/projected/106da5aa-5f2e-4d32-b172-4844ad6de7f6-kube-api-access-k4dpn\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037300 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-system-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037323 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-os-release\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037334 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-system-cni-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037381 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-daemon-config\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037395 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-socket-dir-parent\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037449 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-multus-certs\") pod \"multus-ktf4c\" (UID: 
\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037483 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037508 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-bin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037577 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-hostroot\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037587 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-multus-certs\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037605 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-etc-kubernetes\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: 
I0203 00:15:30.037613 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-system-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037638 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037642 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-bin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037685 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-k8s-cni-cncf-io\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037714 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-multus\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037737 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-conf-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037764 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-os-release\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037761 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-etc-kubernetes\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037791 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037825 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-hostroot\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037826 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-kubelet\") pod 
\"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037862 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-kubelet\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037892 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-cnibin\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037925 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cni-binary-copy\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037937 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-cni-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037951 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-netns\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 
00:15:30.037953 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-os-release\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037975 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-k8s-cni-cncf-io\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.037997 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.038008 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-conf-dir\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.038021 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cnibin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.038036 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-os-release\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.038048 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-var-lib-cni-multus\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.039440 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-cnibin\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.039455 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-host-run-netns\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.039556 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cnibin\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.040009 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-cni-binary-copy\") pod \"multus-ktf4c\" (UID: 
\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.040550 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.040781 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60953d56-8dc2-4adf-96bc-078a558476e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.042638 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60953d56-8dc2-4adf-96bc-078a558476e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.042674 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/106da5aa-5f2e-4d32-b172-4844ad6de7f6-multus-daemon-config\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.059270 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.064344 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.064908 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.065137 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"a46522a93526ac525a2f27370be2ef8cc8c8877d1c4840c91b5d00b96e70ff8d"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.065722 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4nx5v" event={"ID":"f968882a-8e40-4760-b1fe-2d456390d30c","Type":"ContainerStarted","Data":"a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f"} Feb 03 00:15:30 
crc kubenswrapper[4798]: I0203 00:15:30.065827 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4nx5v" event={"ID":"f968882a-8e40-4760-b1fe-2d456390d30c","Type":"ContainerStarted","Data":"d90f8f65cd50321735b357f5cab56227923e1d36e80ffb869b339a0362d1f2d9"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.071766 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t8mqs" event={"ID":"a80722df-c977-49e0-b1ec-a83fea1c4f0b","Type":"ContainerStarted","Data":"3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.071879 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t8mqs" event={"ID":"a80722df-c977-49e0-b1ec-a83fea1c4f0b","Type":"ContainerStarted","Data":"2357f73ea9d23186aa17d2041f65ff1c56b958ab40fa502003f6e218698dd461"} Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.082992 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.090427 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d58sd\" (UniqueName: \"kubernetes.io/projected/60953d56-8dc2-4adf-96bc-078a558476e1-kube-api-access-d58sd\") pod \"multus-additional-cni-plugins-nhpkc\" (UID: \"60953d56-8dc2-4adf-96bc-078a558476e1\") " pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.099416 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4dpn\" (UniqueName: \"kubernetes.io/projected/106da5aa-5f2e-4d32-b172-4844ad6de7f6-kube-api-access-k4dpn\") pod \"multus-ktf4c\" (UID: \"106da5aa-5f2e-4d32-b172-4844ad6de7f6\") " pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.106756 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.134192 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.150392 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.160011 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.162847 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: W0203 00:15:30.172085 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60953d56_8dc2_4adf_96bc_078a558476e1.slice/crio-3c354a50c7868512fb6af6860cde375bd9e8026864eef073bee82010bf16912d WatchSource:0}: Error finding container 3c354a50c7868512fb6af6860cde375bd9e8026864eef073bee82010bf16912d: Status 404 returned error can't find the container with id 3c354a50c7868512fb6af6860cde375bd9e8026864eef073bee82010bf16912d Feb 03 00:15:30 crc 
kubenswrapper[4798]: I0203 00:15:30.175206 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.177965 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ktf4c" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.189434 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: W0203 00:15:30.197829 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106da5aa_5f2e_4d32_b172_4844ad6de7f6.slice/crio-a22f5fe2582ef3af9966ed168e84cebfe98f5653ea2efb484d7eafa6f6f67d04 WatchSource:0}: Error finding container a22f5fe2582ef3af9966ed168e84cebfe98f5653ea2efb484d7eafa6f6f67d04: Status 404 returned error can't find the container with id a22f5fe2582ef3af9966ed168e84cebfe98f5653ea2efb484d7eafa6f6f67d04 Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.200576 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.217387 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gzlj4"] Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.218279 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222275 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222410 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222461 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222684 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222800 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222840 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 
00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222856 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.222887 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d2
9c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.238916 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.251478 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.261770 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.273177 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.286024 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.299716 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.312533 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.328162 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342389 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342444 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342476 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342517 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342540 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342565 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342584 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342727 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342810 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342840 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342860 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342889 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342920 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lxchl\" (UniqueName: \"kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.342996 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343042 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343095 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343146 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343199 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343228 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.343251 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.345463 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.354832 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.365851 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.380590 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.408278 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.425250 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.436714 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444198 4798 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444242 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444265 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444291 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444313 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444345 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444368 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444380 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444387 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444440 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444454 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxchl\" (UniqueName: \"kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl\") pod \"ovnkube-node-gzlj4\" (UID: 
\"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444473 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444337 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444505 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444417 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444530 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 
crc kubenswrapper[4798]: I0203 00:15:30.444584 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444614 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444641 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444693 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444730 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444768 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444797 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444820 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444829 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444855 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444510 4798 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444865 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444906 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444927 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444951 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444959 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444956 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.444991 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.445032 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.445066 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.445185 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.445589 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.447304 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.447930 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.457900 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.465098 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxchl\" (UniqueName: \"kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl\") pod \"ovnkube-node-gzlj4\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.469887 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.483982 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.495334 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.503772 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.530768 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:30Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.539894 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:30 crc kubenswrapper[4798]: W0203 00:15:30.550503 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb71790a2_e390_400a_a288_2a3af8467047.slice/crio-d23292102a8b196adcc307379dafc7b34e895ac7da120126e2860071d4026509 WatchSource:0}: Error finding container d23292102a8b196adcc307379dafc7b34e895ac7da120126e2860071d4026509: Status 404 returned error can't find the container with id d23292102a8b196adcc307379dafc7b34e895ac7da120126e2860071d4026509 Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.856645 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:12:17.04917317 +0000 UTC Feb 03 00:15:30 crc kubenswrapper[4798]: I0203 00:15:30.907534 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:30 crc kubenswrapper[4798]: E0203 00:15:30.907717 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.075990 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" exitCode=0 Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.076100 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.076182 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"d23292102a8b196adcc307379dafc7b34e895ac7da120126e2860071d4026509"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.077387 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.079114 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" 
event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerStarted","Data":"8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.079150 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerStarted","Data":"a22f5fe2582ef3af9966ed168e84cebfe98f5653ea2efb484d7eafa6f6f67d04"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.081281 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74" exitCode=0 Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.081352 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.081388 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerStarted","Data":"3c354a50c7868512fb6af6860cde375bd9e8026864eef073bee82010bf16912d"} Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.097263 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.111869 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.141410 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.167135 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.183784 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.204016 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.253687 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.283886 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.306708 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.327809 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.347974 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.362013 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.380349 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.392380 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.405363 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.423368 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.436754 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.455914 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.474644 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.486954 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.502015 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.522722 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.544413 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.558350 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.573063 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.587413 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.602431 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.615679 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.627452 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.645155 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:31Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.658532 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.658619 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.658682 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.658783 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:35.658751095 +0000 UTC m=+27.424741116 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.658841 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.658879 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.658904 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-03 00:15:35.658887158 +0000 UTC m=+27.424877239 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.659070 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:35.659041481 +0000 UTC m=+27.425031492 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.759558 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.760164 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.759862 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760270 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760296 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760372 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:35.760350964 +0000 UTC m=+27.526340975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760383 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760413 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760431 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.760507 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:35.760482417 +0000 UTC m=+27.526472598 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.857753 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:36:53.2799853 +0000 UTC Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.908182 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:31 crc kubenswrapper[4798]: I0203 00:15:31.908271 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.908402 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:31 crc kubenswrapper[4798]: E0203 00:15:31.908473 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.099777 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b" exitCode=0 Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.099878 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b"} Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.103921 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.103967 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.103979 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.103989 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} Feb 03 00:15:32 
crc kubenswrapper[4798]: I0203 00:15:32.121787 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.148261 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.163447 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.181155 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.194405 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.208006 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.226381 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.242725 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.255522 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.268952 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.287805 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.314816 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.333120 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.350515 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.363458 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:32Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.858346 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:15:06.351729214 +0000 UTC Feb 03 00:15:32 crc kubenswrapper[4798]: I0203 00:15:32.907898 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:32 crc kubenswrapper[4798]: E0203 00:15:32.908041 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.112292 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.112357 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.114957 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8" exitCode=0 Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.115011 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8"} Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.134810 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.152173 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.170834 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.183596 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.203187 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.220005 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.230749 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.252341 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.267151 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.276771 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.288473 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.304374 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 
00:15:33.321939 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.336405 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.350178 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:33Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.859355 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:19:23.133133472 +0000 UTC Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.907384 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:33 crc kubenswrapper[4798]: I0203 00:15:33.907436 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:33 crc kubenswrapper[4798]: E0203 00:15:33.907593 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:33 crc kubenswrapper[4798]: E0203 00:15:33.907839 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.127432 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3" exitCode=0 Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.127473 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.145797 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.162246 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.172661 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.196764 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.210553 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.226784 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.238742 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.248757 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.261054 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.277849 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.288569 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.302288 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.314906 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.326114 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.336762 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.377568 4798 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.380092 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc 
kubenswrapper[4798]: I0203 00:15:34.380150 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.380168 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.380437 4798 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.389421 4798 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.389798 4798 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.390913 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.390946 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.390957 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.390972 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.390984 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.405084 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.408298 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.408336 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.408348 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.408363 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.408375 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.422036 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.426240 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.426267 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.426276 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.426288 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.426299 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.439709 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.443431 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.443489 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.443506 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.443527 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.443546 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.458450 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.462007 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.462029 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.462037 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.462049 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.462059 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.476623 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:34Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.476780 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.478813 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.478845 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.478857 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.478872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.478884 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.581005 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.581043 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.581051 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.581066 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.581076 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.683521 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.683560 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.683572 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.683588 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.683600 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.786268 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.786300 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.786311 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.786325 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.786335 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.860705 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:47:52.335104985 +0000 UTC Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.888903 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.888957 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.888968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.888994 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.889008 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.907798 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:34 crc kubenswrapper[4798]: E0203 00:15:34.907956 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.991392 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.991438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.991449 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.991465 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:34 crc kubenswrapper[4798]: I0203 00:15:34.991476 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:34Z","lastTransitionTime":"2026-02-03T00:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.093977 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.094057 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.094071 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.094091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.094106 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.138401 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.141572 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79" exitCode=0 Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.141628 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.154477 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.166805 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.183872 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.197714 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.200491 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.200529 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc 
kubenswrapper[4798]: I0203 00:15:35.200540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.200555 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.200568 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.210751 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.224986 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.235386 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.258399 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.271927 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.285439 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.297266 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.302717 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.302766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.302779 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.302799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.302813 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.311150 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.326287 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.352091 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.370325 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:35Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.405517 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.405552 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.405562 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc 
kubenswrapper[4798]: I0203 00:15:35.405576 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.405586 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.507708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.507749 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.507762 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.507781 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.507793 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.610794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.610851 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.610869 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.610891 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.610909 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.698830 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.699015 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.699079 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:43.69903577 +0000 UTC m=+35.465025822 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.699126 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.699210 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:43.699188924 +0000 UTC m=+35.465178965 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.699241 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.699387 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.699447 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:43.699432171 +0000 UTC m=+35.465422212 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.714104 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.714136 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.714144 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.714157 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.714168 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.800164 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.800242 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800324 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800352 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800365 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800334 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:35 crc 
kubenswrapper[4798]: E0203 00:15:35.800434 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800444 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800421 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:43.800404925 +0000 UTC m=+35.566394936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.800481 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:43.800470267 +0000 UTC m=+35.566460278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.817217 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.817252 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.817262 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.817278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.817289 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.861241 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:36:13.198298788 +0000 UTC Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.908016 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.908102 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.908192 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:35 crc kubenswrapper[4798]: E0203 00:15:35.908265 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.919977 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.920055 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.920072 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.920098 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:35 crc kubenswrapper[4798]: I0203 00:15:35.920111 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:35Z","lastTransitionTime":"2026-02-03T00:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.023259 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.023354 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.023379 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.023409 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.023432 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.086235 4798 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.125594 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.125696 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.125723 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.125754 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.125777 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.147697 4798 generic.go:334] "Generic (PLEG): container finished" podID="60953d56-8dc2-4adf-96bc-078a558476e1" containerID="50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e" exitCode=0 Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.147759 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerDied","Data":"50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.163093 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.184450 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.201043 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.214806 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.228830 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.228882 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.228899 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.228922 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.228941 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.238113 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.261563 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.280773 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.301077 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.316591 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.328134 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z"
Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.332524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.332567 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.332582 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.332597 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.332607 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.340202 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.365392 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.379985 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.394306 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.404102 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:36Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.435288 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.435330 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.435343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.435372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.435383 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.538698 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.538746 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.538758 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.538774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.538789 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.641995 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.642047 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.642059 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.642078 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.642091 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.744004 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.744054 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.744066 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.744083 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.744095 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.847377 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.847438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.847450 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.847471 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.847483 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.861693 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:40:19.115668293 +0000 UTC Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.907299 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:36 crc kubenswrapper[4798]: E0203 00:15:36.907491 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.951455 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.951491 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.951501 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.951520 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:36 crc kubenswrapper[4798]: I0203 00:15:36.951541 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:36Z","lastTransitionTime":"2026-02-03T00:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.054319 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.054398 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.054409 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.054423 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.054434 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.156354 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.156412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.156429 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.156453 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.156465 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.158527 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.158798 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.162863 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" event={"ID":"60953d56-8dc2-4adf-96bc-078a558476e1","Type":"ContainerStarted","Data":"8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.173903 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.188119 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.188885 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.202186 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.220120 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.243388 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.256770 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.258516 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.258539 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.258548 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.258561 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.258573 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.275643 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.293540 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.308703 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.327898 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.340563 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.362402 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.362448 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.362459 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.362479 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.362496 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.374093 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.390286 4798 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.394277 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.433169 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.448113 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.461359 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.465265 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.465303 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 
crc kubenswrapper[4798]: I0203 00:15:37.465315 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.465333 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.465347 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.472834 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.486216 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.506980 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.520788 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.536205 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.548478 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.566794 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.576584 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.576632 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.576643 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc 
kubenswrapper[4798]: I0203 00:15:37.576677 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.576689 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.581021 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 
00:15:37.592718 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.607005 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.622727 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"
name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.642426 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.657785 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.672513 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:37Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.679888 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc 
kubenswrapper[4798]: I0203 00:15:37.679932 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.679941 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.679961 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.679971 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.784139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.784186 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.784198 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.784218 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.784231 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.862100 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:22:34.086331934 +0000 UTC Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.887766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.887810 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.887820 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.887841 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.887856 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.907409 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.907527 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:37 crc kubenswrapper[4798]: E0203 00:15:37.907606 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:37 crc kubenswrapper[4798]: E0203 00:15:37.907816 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.990716 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.990771 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.990784 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.990804 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:37 crc kubenswrapper[4798]: I0203 00:15:37.990819 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:37Z","lastTransitionTime":"2026-02-03T00:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.093937 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.094016 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.094040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.094068 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.094090 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.166005 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.166588 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.197959 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.198298 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.198566 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.198682 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.198716 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.198733 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.221011 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.236758 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.255243 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.273078 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.287433 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.302044 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.302110 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.302120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.302135 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.302147 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.318139 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.335702 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.350982 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.365888 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.383162 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.404454 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.404504 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.404517 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.404538 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.404551 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.406975 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.423761 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.441575 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.453842 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.466700 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.507614 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc 
kubenswrapper[4798]: I0203 00:15:38.507687 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.507698 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.507714 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.507725 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.610764 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.610817 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.610844 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.610863 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.610884 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.714021 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.714076 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.714091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.714112 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.714127 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.817571 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.817640 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.817669 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.817689 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.817702 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.862496 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:21:36.336157281 +0000 UTC Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.907619 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:38 crc kubenswrapper[4798]: E0203 00:15:38.907870 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.920950 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.920995 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.921004 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.921019 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.921064 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:38Z","lastTransitionTime":"2026-02-03T00:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.929506 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.944146 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.980632 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:38 crc kubenswrapper[4798]: I0203 00:15:38.999060 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:38Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.019040 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.025472 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.025530 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.025547 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.025568 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.025583 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.031071 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.046016 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.063193 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.085182 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.099764 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.115340 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.128243 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.128356 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.128367 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.128384 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.128395 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.129220 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.143746 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.156732 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.169902 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.169984 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:39Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.230531 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.230585 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.230600 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.230623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.230638 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.334179 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.334947 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.335042 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.335081 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.335111 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.438929 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.438990 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.439001 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.439018 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.439030 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.541626 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.541682 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.541701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.541723 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.541732 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.644169 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.644209 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.644218 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.644232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.644243 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.746757 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.746807 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.746818 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.746836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.746849 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.853047 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.853154 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.853262 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.853287 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.853306 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.863530 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:28:11.944667162 +0000 UTC Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.907880 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.908087 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:39 crc kubenswrapper[4798]: E0203 00:15:39.908140 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:39 crc kubenswrapper[4798]: E0203 00:15:39.908298 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.955886 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.955930 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.955940 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.955961 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:39 crc kubenswrapper[4798]: I0203 00:15:39.955973 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:39Z","lastTransitionTime":"2026-02-03T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.058534 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.058618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.058636 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.058679 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.058694 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.161322 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.161376 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.161395 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.161423 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.161441 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.172780 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.265027 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.265082 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.265099 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.265120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.265137 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.362012 4798 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.367909 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.367962 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.367976 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.367999 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.368015 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.471629 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.471711 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.471738 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.471763 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.471781 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.575532 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.575592 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.575604 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.575625 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.575640 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.679473 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.679531 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.679552 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.679584 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.679601 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.782703 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.782747 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.782758 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.782781 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.782796 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.864884 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:08:00.22230573 +0000 UTC Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.886014 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.886071 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.886085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.886107 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.886122 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.907849 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:40 crc kubenswrapper[4798]: E0203 00:15:40.908077 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.989429 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.989492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.989510 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.989534 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:40 crc kubenswrapper[4798]: I0203 00:15:40.989554 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:40Z","lastTransitionTime":"2026-02-03T00:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.093278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.093335 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.093353 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.093380 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.093397 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.178342 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/0.log" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.182104 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2" exitCode=1 Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.182181 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.183495 4798 scope.go:117] "RemoveContainer" containerID="21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.195458 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.195504 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.195514 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.195533 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.195552 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.206167 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.225977 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.243732 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.261193 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.279962 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.298168 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.298222 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.298235 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.298257 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.298270 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.307081 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.321020 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.338643 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.351783 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.364712 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.388383 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.400620 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.400678 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.400690 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.400706 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.400720 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.415889 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.433316 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.449206 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.460555 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:41Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.502774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.502812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.502821 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.502836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.502848 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.605727 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.605779 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.605794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.605815 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.605830 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.709202 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.709263 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.709279 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.709301 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.709314 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.812218 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.812257 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.812268 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.812284 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.812297 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.865306 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:30:04.697939994 +0000 UTC Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.907440 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.907486 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:41 crc kubenswrapper[4798]: E0203 00:15:41.907581 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:41 crc kubenswrapper[4798]: E0203 00:15:41.907770 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.914456 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.914498 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.914513 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.914537 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:41 crc kubenswrapper[4798]: I0203 00:15:41.914548 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:41Z","lastTransitionTime":"2026-02-03T00:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.017579 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.017691 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.017766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.017799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.017818 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.121322 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.121403 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.121439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.121468 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.121493 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.127999 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm"] Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.128980 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.130778 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.131264 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.150442 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.169208 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.179297 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.179356 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.179398 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.179574 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dk4\" (UniqueName: \"kubernetes.io/projected/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-kube-api-access-89dk4\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.184504 4798 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.217379 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":
\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.224999 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.225067 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.225083 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 
00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.225106 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.225123 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.237951 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.259418 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.273382 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.280361 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.280793 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.280835 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.280919 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dk4\" (UniqueName: 
\"kubernetes.io/projected/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-kube-api-access-89dk4\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.281715 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.281779 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.289219 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.291075 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.309562 4798 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-89dk4\" (UniqueName: \"kubernetes.io/projected/db2cbb1b-24c0-405a-ae34-2c1b4af6f999-kube-api-access-89dk4\") pod \"ovnkube-control-plane-749d76644c-j6wfm\" (UID: \"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.311860 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f67
13d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{
\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.328892 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.328967 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.328986 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.329013 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.329032 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.344146 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.366099 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.382721 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.398755 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.414736 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.436411 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.438191 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.438228 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.438243 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.438262 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.438276 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.450225 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.452250 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:42Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:42 crc kubenswrapper[4798]: W0203 00:15:42.469346 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2cbb1b_24c0_405a_ae34_2c1b4af6f999.slice/crio-b7804f9b7912dd8353bb6fece7c105df359d4be2adf81b51cab4ea3a9d0317be WatchSource:0}: Error finding container b7804f9b7912dd8353bb6fece7c105df359d4be2adf81b51cab4ea3a9d0317be: Status 404 returned error can't find the container with id b7804f9b7912dd8353bb6fece7c105df359d4be2adf81b51cab4ea3a9d0317be Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.541382 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.541428 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.541437 4798 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.541455 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.541466 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.644745 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.644804 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.644815 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.644834 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.644846 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.747419 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.747506 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.747522 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.747546 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.747566 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.850545 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.850613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.850626 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.850646 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.850728 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.866027 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:50:29.060889442 +0000 UTC Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.908326 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:42 crc kubenswrapper[4798]: E0203 00:15:42.908472 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.954002 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.954112 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.954130 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.954152 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:42 crc kubenswrapper[4798]: I0203 00:15:42.954168 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:42Z","lastTransitionTime":"2026-02-03T00:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.057911 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.058261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.058341 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.058408 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.058477 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.162598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.162698 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.162717 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.162743 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.162762 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.194853 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/0.log" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.199556 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.199812 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.203001 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" event={"ID":"db2cbb1b-24c0-405a-ae34-2c1b4af6f999","Type":"ContainerStarted","Data":"a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.203036 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" event={"ID":"db2cbb1b-24c0-405a-ae34-2c1b4af6f999","Type":"ContainerStarted","Data":"b7804f9b7912dd8353bb6fece7c105df359d4be2adf81b51cab4ea3a9d0317be"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.224829 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.246333 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265347 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265389 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265933 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265950 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.265982 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.283643 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.296428 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.311903 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.336402 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.347615 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.362139 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.368934 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.368997 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.369023 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.369056 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.369083 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.372363 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.385525 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.398505 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.408706 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.431448 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.450668 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.471575 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.471612 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.471621 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.471635 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.471647 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.477883 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.574744 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.574797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.574816 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.574840 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.574859 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.652078 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hzk9m"] Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.652731 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.652800 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.677735 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.677787 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.677800 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.677820 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.677833 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.679450 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.695045 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.695246 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: 
\"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.695285 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4pnx\" (UniqueName: \"kubernetes.io/projected/039e204d-4d36-471e-990f-4eb5b4a193fc-kube-api-access-w4pnx\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.710122 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.721046 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.753138 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.780136 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.780178 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.780187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.780204 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.780224 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.781300 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.796677 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.796828 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.796800697 +0000 UTC m=+51.562790728 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.796874 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.796946 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.796982 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4pnx\" (UniqueName: \"kubernetes.io/projected/039e204d-4d36-471e-990f-4eb5b4a193fc-kube-api-access-w4pnx\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.797007 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797150 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797190 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.797183225 +0000 UTC m=+51.563173236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797533 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797563 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.797556434 +0000 UTC m=+51.563546445 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797600 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.797620 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:15:44.297612305 +0000 UTC m=+36.063602336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.804374 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.815525 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4pnx\" (UniqueName: \"kubernetes.io/projected/039e204d-4d36-471e-990f-4eb5b4a193fc-kube-api-access-w4pnx\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.821489 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.837636 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.850139 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc 
kubenswrapper[4798]: I0203 00:15:43.867021 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:14:22.758573302 +0000 UTC Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.867824 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.879526 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.882040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.882085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.882094 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.882114 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.882124 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.895605 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.897818 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.897893 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898034 4798 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898060 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898071 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898111 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.898097139 +0000 UTC m=+51.664087150 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898381 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898404 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898415 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.898467 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.898457027 +0000 UTC m=+51.664447038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.907272 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.907311 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.907386 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:43 crc kubenswrapper[4798]: E0203 00:15:43.907530 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.914923 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.928014 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.939985 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.949847 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:43Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.988927 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.988964 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.988974 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.988990 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:43 crc kubenswrapper[4798]: I0203 00:15:43.989001 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:43Z","lastTransitionTime":"2026-02-03T00:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.091613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.091666 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.091679 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.091693 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.091704 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.194467 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.194513 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.194522 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.194536 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.194547 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.208112 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" event={"ID":"db2cbb1b-24c0-405a-ae34-2c1b4af6f999","Type":"ContainerStarted","Data":"cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.225098 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.239826 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.255144 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815
c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.265974 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc 
kubenswrapper[4798]: I0203 00:15:44.281314 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.294677 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.296783 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.296832 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.296848 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.296871 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.296887 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.301794 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.301988 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.302062 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:15:45.30204152 +0000 UTC m=+37.068031561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.310021 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.333584 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.349618 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.366298 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.383956 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.400039 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.400084 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.400096 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.400110 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.400120 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.404632 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.425329 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.438468 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.450102 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.465736 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.488098 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.503528 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.503588 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.503605 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.503633 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.503670 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.605968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.606013 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.606025 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.606044 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.606061 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.614061 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.614108 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.614120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.614162 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.614175 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.632982 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.638002 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.638062 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.638079 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.638100 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.638117 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.651986 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.663583 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.663647 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.663704 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.663734 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.663756 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.686325 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.691088 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.691130 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.691142 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.691159 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.691171 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.712966 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.717114 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.717166 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.717178 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.717197 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.717209 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.737710 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:44Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.737857 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.739746 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.739785 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.739797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.739814 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.739827 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.842921 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.843337 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.843358 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.843384 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.843402 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.867262 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:27:33.395277031 +0000 UTC Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.907340 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:44 crc kubenswrapper[4798]: E0203 00:15:44.907522 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.946738 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.946812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.946836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.946867 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:44 crc kubenswrapper[4798]: I0203 00:15:44.946893 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:44Z","lastTransitionTime":"2026-02-03T00:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.049090 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.049155 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.049177 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.049198 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.049213 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.151775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.151831 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.151849 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.151873 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.151890 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.213572 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/1.log" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.214557 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/0.log" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.218528 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c" exitCode=1 Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.218639 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.218770 4798 scope.go:117] "RemoveContainer" containerID="21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.220034 4798 scope.go:117] "RemoveContainer" containerID="a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c" Feb 03 00:15:45 crc kubenswrapper[4798]: E0203 00:15:45.220272 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.244329 4798 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.255521 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.255564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.255582 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.255605 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.255621 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.264888 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.304402 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f
759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.311564 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:45 crc 
kubenswrapper[4798]: E0203 00:15:45.311710 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:45 crc kubenswrapper[4798]: E0203 00:15:45.311763 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:15:47.311747878 +0000 UTC m=+39.077737889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.328185 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.343933 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.359078 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.359152 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.359170 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.359209 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.359231 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.361618 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.388192 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.419241 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone 
local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networ
ks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.435921 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.456784 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.462623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.462683 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.462696 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.462714 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.462727 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.475146 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc 
kubenswrapper[4798]: I0203 00:15:45.491141 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.506528 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.521483 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815
c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.533552 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.546282 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.559462 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:45Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.565345 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.565388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.565398 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.565418 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.565431 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.668310 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.668368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.668382 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.668411 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.668426 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.771601 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.771705 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.771900 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.771955 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.771975 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.867697 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:49:00.06998362 +0000 UTC Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.875099 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.875149 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.875162 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.875183 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.875198 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.907613 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.907702 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.907724 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:45 crc kubenswrapper[4798]: E0203 00:15:45.907863 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:45 crc kubenswrapper[4798]: E0203 00:15:45.907990 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:45 crc kubenswrapper[4798]: E0203 00:15:45.908090 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.977769 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.977841 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.977854 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.977873 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:45 crc kubenswrapper[4798]: I0203 00:15:45.977886 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:45Z","lastTransitionTime":"2026-02-03T00:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.081685 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.081750 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.081766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.081792 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.081812 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.184851 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.184918 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.184941 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.184966 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.184984 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.225106 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/1.log" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.289015 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.289082 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.289108 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.289137 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.289163 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.392175 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.392236 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.392255 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.392277 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.392296 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.495597 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.495687 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.495708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.495731 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.495747 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.598287 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.598356 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.598369 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.598386 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.598397 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.701527 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.701598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.701632 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.701699 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.701725 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.804135 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.804204 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.804225 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.804252 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.804274 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.868054 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:52:23.477721583 +0000 UTC Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907256 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907313 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:46 crc kubenswrapper[4798]: I0203 00:15:46.907431 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:46Z","lastTransitionTime":"2026-02-03T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:46 crc kubenswrapper[4798]: E0203 00:15:46.907447 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.010376 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.010445 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.010456 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.010482 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.010496 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.113714 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.113779 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.113793 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.113820 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.113831 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.216796 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.216875 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.216891 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.216908 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.216919 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.320213 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.320268 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.320277 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.320294 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.320305 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.335176 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:15:47 crc kubenswrapper[4798]: E0203 00:15:47.335323 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 00:15:47 crc kubenswrapper[4798]: E0203 00:15:47.335384 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:15:51.335367152 +0000 UTC m=+43.101357163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.423593 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.423697 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.423716 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.423748 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.423766 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.528621 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.528723 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.528737 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.528755 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.528772 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.631350 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.631399 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.631419 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.631445 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.631464 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.734720 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.734799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.734812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.734830 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.734842 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.838396 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.838464 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.838485 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.838508 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.838527 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.868770 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:55:54.961228645 +0000 UTC
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.908060 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.908176 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.908207 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:15:47 crc kubenswrapper[4798]: E0203 00:15:47.908369 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:15:47 crc kubenswrapper[4798]: E0203 00:15:47.908476 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:15:47 crc kubenswrapper[4798]: E0203 00:15:47.908618 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.941356 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.941420 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.941438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.941463 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:47 crc kubenswrapper[4798]: I0203 00:15:47.941479 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:47Z","lastTransitionTime":"2026-02-03T00:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.045247 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.045337 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.045361 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.045395 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.045440 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.148347 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.148401 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.148412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.148433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.148447 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.251584 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.251709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.251733 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.251775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.251793 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.353822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.353891 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.353907 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.353927 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.353941 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.457564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.457638 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.457701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.457729 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.457747 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.560174 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.560201 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.560209 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.560223 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.560235 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.663357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.663444 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.663460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.663492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.663509 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.766817 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.766876 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.766900 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.766926 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.766945 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.868984 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:37:12.882998187 +0000 UTC
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.870797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.870867 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.870884 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.870908 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.870924 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.907545 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:48 crc kubenswrapper[4798]: E0203 00:15:48.907732 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.926998 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:48Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.940180 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:48Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.966558 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00
:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:48Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.973028 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.973094 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.973104 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.973118 4798 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.973127 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:48Z","lastTransitionTime":"2026-02-03T00:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:48 crc kubenswrapper[4798]: I0203 00:15:48.982689 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:48Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.000198 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:48Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.012410 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.025275 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.043852 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.067335 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21f82fb24c3cc8120b8961e97b13d0ac424998a4e57ccaeecf00afd91f1fdec2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:40Z\\\",\\\"message\\\":\\\"oval\\\\nI0203 00:15:40.535323 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 00:15:40.535374 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 00:15:40.537308 6074 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 00:15:40.537336 6074 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0203 00:15:40.537349 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:15:40.539280 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 00:15:40.539342 6074 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 00:15:40.539350 6074 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:15:40.539376 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0203 00:15:40.539453 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 00:15:40.539486 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 00:15:40.539496 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 00:15:40.539503 6074 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 00:15:40.539510 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:15:40.539516 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 00:15:40.540297 6074 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone 
local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networ
ks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.076573 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc 
kubenswrapper[4798]: I0203 00:15:49.076613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.076625 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.076641 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.076680 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.081410 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.094342 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.107900 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc 
kubenswrapper[4798]: I0203 00:15:49.119634 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.137055 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.150432 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.166808 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.179491 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.179530 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc 
kubenswrapper[4798]: I0203 00:15:49.179540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.179554 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.179564 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.181116 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:49Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.281913 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.281996 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.282022 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.282061 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.282084 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.385033 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.385093 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.385113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.385138 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.385159 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.488015 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.488065 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.488082 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.488096 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.488110 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.590598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.590687 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.590705 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.590728 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.590745 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.693918 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.694006 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.694032 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.694063 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.694089 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.797556 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.797609 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.797623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.797689 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.797703 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.869860 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:31:45.381482101 +0000 UTC Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.900996 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.901085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.901114 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.901142 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.901166 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:49Z","lastTransitionTime":"2026-02-03T00:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.907440 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.907475 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:49 crc kubenswrapper[4798]: I0203 00:15:49.907588 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:49 crc kubenswrapper[4798]: E0203 00:15:49.907809 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:49 crc kubenswrapper[4798]: E0203 00:15:49.907989 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:49 crc kubenswrapper[4798]: E0203 00:15:49.908099 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.004836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.004929 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.004952 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.004983 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.005006 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.107554 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.107622 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.107645 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.107716 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.107734 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.210050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.210113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.210134 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.210163 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.210187 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.313538 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.313598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.313622 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.313688 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.313714 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.416510 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.416585 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.416604 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.416629 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.416648 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.519395 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.519446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.519457 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.519475 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.519490 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.622229 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.622258 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.622266 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.622278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.622286 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.724913 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.724987 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.725000 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.725020 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.725041 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.828802 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.828882 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.828901 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.828926 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.828943 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.870595 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:42:41.626914551 +0000 UTC Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.908346 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:50 crc kubenswrapper[4798]: E0203 00:15:50.908562 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.931647 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.931743 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.931762 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.931787 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:50 crc kubenswrapper[4798]: I0203 00:15:50.931808 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:50Z","lastTransitionTime":"2026-02-03T00:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.035633 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.035756 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.035774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.035803 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.035822 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.138810 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.138878 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.138895 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.138919 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.138941 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.243129 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.243200 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.243218 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.243248 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.243268 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.347282 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.347357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.347370 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.347399 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.347415 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.376509 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:51 crc kubenswrapper[4798]: E0203 00:15:51.376766 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:51 crc kubenswrapper[4798]: E0203 00:15:51.376895 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:15:59.376869549 +0000 UTC m=+51.142859570 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.451088 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.451169 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.451187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.451403 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.451423 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.555096 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.555168 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.555181 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.555205 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.555223 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.658668 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.658742 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.658759 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.658786 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.658801 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.761607 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.761718 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.761775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.761808 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.761830 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.864850 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.864908 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.864921 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.864951 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.864970 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.871293 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:11:54.475397967 +0000 UTC Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.907910 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.908001 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:51 crc kubenswrapper[4798]: E0203 00:15:51.908064 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.907927 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:51 crc kubenswrapper[4798]: E0203 00:15:51.908186 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:51 crc kubenswrapper[4798]: E0203 00:15:51.908419 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.968123 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.968174 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.968185 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.968203 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:51 crc kubenswrapper[4798]: I0203 00:15:51.968216 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:51Z","lastTransitionTime":"2026-02-03T00:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.069919 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.069956 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.069967 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.069982 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.069994 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.173462 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.173524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.173541 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.173563 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.173582 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.276861 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.276905 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.276919 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.276936 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.276948 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.380380 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.380433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.380450 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.380478 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.380493 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.484018 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.484080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.484102 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.484129 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.484150 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.587427 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.587506 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.587524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.587543 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.587560 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.690767 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.690823 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.690843 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.690869 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.690893 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.793564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.793618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.793694 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.793733 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.793753 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.872297 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:37:28.289537732 +0000 UTC Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.896393 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.896453 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.896472 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.896496 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.896515 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.907944 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:52 crc kubenswrapper[4798]: E0203 00:15:52.908163 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.999078 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.999144 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.999163 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.999188 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:52 crc kubenswrapper[4798]: I0203 00:15:52.999206 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:52Z","lastTransitionTime":"2026-02-03T00:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.102016 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.102056 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.102067 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.102082 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.102094 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.206918 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.206994 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.207030 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.207050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.207062 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.309301 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.309360 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.309379 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.309402 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.309421 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.412490 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.412547 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.412564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.412586 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.412603 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.515732 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.515829 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.515856 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.515889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.515909 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.619278 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.619352 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.619372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.619397 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.619418 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.722126 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.722181 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.722194 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.722214 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.722230 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.824986 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.825037 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.825047 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.825069 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.825081 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.872577 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:17:34.253013347 +0000 UTC Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.907623 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.907728 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.907623 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:53 crc kubenswrapper[4798]: E0203 00:15:53.907823 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:53 crc kubenswrapper[4798]: E0203 00:15:53.907936 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:53 crc kubenswrapper[4798]: E0203 00:15:53.908132 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.929409 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.929486 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.929503 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.929526 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:53 crc kubenswrapper[4798]: I0203 00:15:53.929547 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:53Z","lastTransitionTime":"2026-02-03T00:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.033049 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.033118 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.033143 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.033172 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.033194 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.136424 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.136713 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.136752 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.136785 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.136811 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.239606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.239646 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.239677 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.239693 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.239704 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.342617 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.342672 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.342680 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.342693 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.342701 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.445466 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.445507 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.445518 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.445535 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.445547 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.548693 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.548751 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.548767 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.548787 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.548802 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.651402 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.651462 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.651484 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.651505 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.651525 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.754464 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.754496 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.754506 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.754519 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.754528 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.857311 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.857374 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.857397 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.857425 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.857446 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.873150 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:34:58.916126443 +0000 UTC Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.909543 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:54 crc kubenswrapper[4798]: E0203 00:15:54.910294 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.960953 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.961031 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.961055 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.961096 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.961122 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.962835 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.962885 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.962902 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.962923 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.962939 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:54 crc kubenswrapper[4798]: E0203 00:15:54.985367 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:54Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.998098 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.998176 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.998201 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.998230 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:54 crc kubenswrapper[4798]: I0203 00:15:54.998253 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:54Z","lastTransitionTime":"2026-02-03T00:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.016746 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:55Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.022687 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.022739 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.022757 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.022819 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.022838 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.041816 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:55Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.047846 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.047892 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.047904 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.047930 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.047949 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.064377 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:55Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.069865 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.069947 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.069967 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.069998 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.070021 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.088255 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:55Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.088539 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.090770 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.090838 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.090855 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.090879 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.090899 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.194579 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.194686 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.194708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.194732 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.194749 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.297641 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.297728 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.297772 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.297797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.297813 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.400891 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.400943 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.400960 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.400984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.401002 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.504753 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.504815 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.504836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.504863 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.504881 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.608329 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.608398 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.608431 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.608460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.608482 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.711152 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.711239 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.711259 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.711281 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.711298 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.814131 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.814184 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.814205 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.814229 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.814246 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.874335 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:11:54.822816475 +0000 UTC Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.907957 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.908024 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.908073 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.908131 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.908278 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:55 crc kubenswrapper[4798]: E0203 00:15:55.908458 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.917229 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.917275 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.917286 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.917304 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:55 crc kubenswrapper[4798]: I0203 00:15:55.917316 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:55Z","lastTransitionTime":"2026-02-03T00:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.020681 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.020768 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.020792 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.020822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.020851 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.123952 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.124033 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.124058 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.124089 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.124111 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.227460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.227526 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.227564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.227592 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.227613 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.330648 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.330726 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.330738 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.330776 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.330793 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.435846 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.435968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.435996 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.436027 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.436051 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.539797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.539883 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.539919 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.539947 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.539973 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.643539 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.643606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.643624 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.643649 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.643700 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.747251 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.747308 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.747325 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.747351 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.747369 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.850759 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.850816 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.850827 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.850846 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.850860 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.874478 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:58:00.003354509 +0000 UTC Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.908229 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:56 crc kubenswrapper[4798]: E0203 00:15:56.908615 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.909475 4798 scope.go:117] "RemoveContainer" containerID="a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.929293 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a9
75f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:56Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.954959 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.955008 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.955021 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.955038 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.955051 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:56Z","lastTransitionTime":"2026-02-03T00:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.958376 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:56Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:56 crc kubenswrapper[4798]: I0203 00:15:56.980123 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:56Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.004300 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.023028 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af8
99499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026
-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.047321 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15
:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.059436 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.059534 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.059564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.059596 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.059623 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.073567 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.095090 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.116482 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.133236 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.155268 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.161912 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.161940 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 
00:15:57.161953 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.161969 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.161982 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.179379 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524
a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.197534 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.211472 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc 
kubenswrapper[4798]: I0203 00:15:57.225963 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.240613 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.255173 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.264832 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.264877 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.264895 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.264913 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.264929 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.277451 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/1.log" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.280306 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.280522 4798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.314581 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.339197 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.367643 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.369465 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.369516 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.369534 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.369557 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.369571 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.385060 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.401142 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.419761 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.438211 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.458225 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.472343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc 
kubenswrapper[4798]: I0203 00:15:57.472418 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.472436 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.472464 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.472512 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.473877 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.492766 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc 
kubenswrapper[4798]: I0203 00:15:57.516748 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.535437 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.553450 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.574823 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.574875 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.574888 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.574910 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.574923 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.583782 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.603815 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.622645 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.643038 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:57Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.677409 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.677480 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.677492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.677516 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.677532 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.780177 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.780232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.780247 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.780269 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.780283 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.874690 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:50:06.7235513 +0000 UTC Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.882623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.882715 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.882731 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.882754 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.882771 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.907975 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.908029 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.908091 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:57 crc kubenswrapper[4798]: E0203 00:15:57.908160 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:57 crc kubenswrapper[4798]: E0203 00:15:57.908317 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:57 crc kubenswrapper[4798]: E0203 00:15:57.908442 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.985761 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.985826 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.985843 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.985869 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:57 crc kubenswrapper[4798]: I0203 00:15:57.985889 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:57Z","lastTransitionTime":"2026-02-03T00:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.097915 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.100122 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.100164 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.100174 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.100190 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.100203 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.203312 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.203361 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.203373 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.203388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.203400 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.286416 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/2.log" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.287296 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/1.log" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.291167 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" exitCode=1 Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.291234 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.291311 4798 scope.go:117] "RemoveContainer" containerID="a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.292280 4798 scope.go:117] "RemoveContainer" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" Feb 03 00:15:58 crc kubenswrapper[4798]: E0203 00:15:58.292545 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.306271 4798 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.306422 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.306446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.306474 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.306495 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.314717 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.334948 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.351523 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.382196 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.403304 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.408776 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.408824 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.408841 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.408865 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.408887 4798 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.423164 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.438940 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289
d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.457348 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.477819 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.494405 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.511836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.511908 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.511931 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.511957 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.511975 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.514516 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.538093 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.561628 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 
10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 
00:15:58.577131 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.603746 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.615483 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc 
kubenswrapper[4798]: I0203 00:15:58.615534 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.615547 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.615566 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.615579 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.623526 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.638972 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc 
kubenswrapper[4798]: I0203 00:15:58.718389 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.718441 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.718453 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.718471 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.718483 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.821579 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.821739 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.821768 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.821785 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.821797 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.875046 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:19:30.047203034 +0000 UTC Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.907970 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:58 crc kubenswrapper[4798]: E0203 00:15:58.908136 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.924139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.924199 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.924217 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.924238 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.924258 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:58Z","lastTransitionTime":"2026-02-03T00:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.929341 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.947634 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.961914 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.974206 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:58 crc kubenswrapper[4798]: I0203 00:15:58.986401 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.000082 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:58Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.011268 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.027359 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.027425 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.027440 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.027456 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.027468 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.031708 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.091587 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.115433 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a50646fe0204516756577642b1e5dfe13e0e71070560b2add7b44f34c610b11c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"message\\\":\\\"212 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 00:15:44.080879 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0203 00:15:44.081059 6212 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI0203 00:15:44.080711 6212 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nI0203 00:15:44.081070 6212 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-b842j in node crc\\\\nI0203 00:15:44.081076 6212 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-b842j after 0 failed attempt(s)\\\\nI0203 00:15:44.081081 6212 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-b842j\\\\nF0203 00:15:44.080692 6212 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 
10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 
00:15:59.129878 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.130061 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.130236 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.130376 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.130489 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.132069 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed1
39edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.147716 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.159678 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815
c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.173079 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc 
kubenswrapper[4798]: I0203 00:15:59.191450 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.205507 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.224885 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.233558 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.233903 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.234086 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.234232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.234344 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.298019 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/2.log" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.302179 4798 scope.go:117] "RemoveContainer" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.302327 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.316369 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.337625 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.337743 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.337768 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.337796 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.337814 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.354513 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.374850 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.398027 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.410025 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af8
99499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026
-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.412531 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.412987 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.413080 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:16:15.413057423 +0000 UTC m=+67.179047444 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.430088 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.440872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.441065 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.441227 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.441369 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.441550 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.448225 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.463165 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.478444 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.492400 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.515494 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.532781 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.544407 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc 
kubenswrapper[4798]: I0203 00:15:59.544446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.544457 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.544477 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.544491 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.551385 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.567824 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc 
kubenswrapper[4798]: I0203 00:15:59.583321 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.600171 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.615451 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:15:59Z is after 2025-08-24T17:21:41Z" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.646834 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.646878 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.646893 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.646915 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.646930 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.750254 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.750308 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.750326 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.750353 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.750371 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.819362 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.819534 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.819730 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.819726 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:16:31.819622005 +0000 UTC m=+83.585612056 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.819846 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.819872 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.819917 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:16:31.819893372 +0000 UTC m=+83.585883423 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.820198 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:16:31.820183588 +0000 UTC m=+83.586173629 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.853521 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.853612 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.853636 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.853695 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.853715 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.875827 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:58:56.040398715 +0000 UTC Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.907349 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.907391 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.907428 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.907524 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.907718 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.907874 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.920926 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.921056 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921226 4798 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921264 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921266 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921283 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921300 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921318 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921381 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-03 00:16:31.921356077 +0000 UTC m=+83.687346128 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:59 crc kubenswrapper[4798]: E0203 00:15:59.921420 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:16:31.921403208 +0000 UTC m=+83.687393269 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.957553 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.957635 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.957704 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.957742 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
03 00:15:59 crc kubenswrapper[4798]: I0203 00:15:59.957766 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:15:59Z","lastTransitionTime":"2026-02-03T00:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.061419 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.061503 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.061526 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.061559 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.061577 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.164588 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.164708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.164727 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.164753 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.164770 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.267358 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.267419 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.267436 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.267462 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.267480 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.370325 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.370371 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.370380 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.370394 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.370403 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.472613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.472693 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.472706 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.472726 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.472738 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.574622 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.574731 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.574774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.574807 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.574830 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.677105 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.677176 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.677198 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.677232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.677254 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.784449 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.784489 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.784701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.784720 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.784731 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.876797 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:37:28.497087739 +0000 UTC Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.886717 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.886752 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.886763 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.886780 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.886791 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.907753 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:00 crc kubenswrapper[4798]: E0203 00:16:00.907882 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.989545 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.989611 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.989619 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.989673 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:00 crc kubenswrapper[4798]: I0203 00:16:00.989684 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:00Z","lastTransitionTime":"2026-02-03T00:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.093124 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.093174 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.093193 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.093219 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.093236 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.196135 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.196183 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.196194 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.196213 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.196226 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.241515 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.254637 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.261216 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.278553 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc 
kubenswrapper[4798]: I0203 00:16:01.299368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.299423 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.299438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.299458 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.299473 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.303751 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed1
39edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.319640 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.330704 4798 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.347051 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.365814 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.387443 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.401775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.401827 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.401863 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.401885 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.401898 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.403446 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.436152 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f
759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.455295 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.472178 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.486103 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.499463 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.504365 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.504449 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.504467 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.504489 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.504503 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.521418 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.545944 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.566902 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:01Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.606706 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.606793 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.606848 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc 
kubenswrapper[4798]: I0203 00:16:01.606870 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.606887 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.709937 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.710003 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.710018 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.710034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.710048 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.812762 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.812822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.812835 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.812852 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.812870 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.877960 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:20:19.715722793 +0000 UTC Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.907340 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:01 crc kubenswrapper[4798]: E0203 00:16:01.907503 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.907571 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.907548 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:01 crc kubenswrapper[4798]: E0203 00:16:01.907884 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:01 crc kubenswrapper[4798]: E0203 00:16:01.907981 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.915219 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.915261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.915276 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.915293 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:01 crc kubenswrapper[4798]: I0203 00:16:01.915305 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:01Z","lastTransitionTime":"2026-02-03T00:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.018251 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.018324 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.018349 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.018374 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.018393 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.121728 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.121796 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.121819 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.121844 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.121888 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.224270 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.224319 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.224330 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.224348 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.224362 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.327391 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.327489 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.327504 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.327531 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.327550 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.429730 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.429771 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.429780 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.429793 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.429803 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.532859 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.533080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.533140 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.533241 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.533312 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.635786 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.635814 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.635822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.635834 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.635842 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.738248 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.738524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.738601 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.738696 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.738793 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.841789 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.841864 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.841881 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.841899 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.841914 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.899805 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:47:46.818786882 +0000 UTC Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.907561 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:02 crc kubenswrapper[4798]: E0203 00:16:02.907855 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.944350 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.944391 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.944400 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.944414 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:02 crc kubenswrapper[4798]: I0203 00:16:02.944423 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:02Z","lastTransitionTime":"2026-02-03T00:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.046984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.047017 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.047025 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.047040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.047049 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.149237 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.149291 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.149302 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.149320 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.149334 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.252317 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.252399 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.252424 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.252455 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.252480 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.355248 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.355314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.355332 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.355381 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.355397 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.458420 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.458492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.458510 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.458535 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.458556 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.562223 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.562306 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.562330 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.562364 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.562387 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.664838 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.664923 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.664951 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.664984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.665009 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.768583 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.768635 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.768646 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.768687 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.768702 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.871909 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.871999 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.872025 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.872059 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.872084 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.900609 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:44:54.494541101 +0000 UTC Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.908093 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.908155 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.908104 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:03 crc kubenswrapper[4798]: E0203 00:16:03.908277 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:03 crc kubenswrapper[4798]: E0203 00:16:03.908430 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:03 crc kubenswrapper[4798]: E0203 00:16:03.908614 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.975498 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.975549 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.975562 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.975578 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:03 crc kubenswrapper[4798]: I0203 00:16:03.975590 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:03Z","lastTransitionTime":"2026-02-03T00:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.079031 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.079083 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.079095 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.079112 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.079123 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.181987 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.182047 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.182063 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.182087 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.182104 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.285501 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.285565 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.285582 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.285603 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.285620 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.388543 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.388603 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.388620 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.388643 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.388698 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.491973 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.492040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.492063 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.492093 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.492120 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.595797 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.595877 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.595900 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.595933 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.595955 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.698632 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.698701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.698714 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.698731 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.698743 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.802460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.802551 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.802575 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.802606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.802630 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.901615 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:41:04.787585094 +0000 UTC
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.906827 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.906887 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.906909 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.906934 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.906952 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:04Z","lastTransitionTime":"2026-02-03T00:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:04 crc kubenswrapper[4798]: I0203 00:16:04.907201 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:04 crc kubenswrapper[4798]: E0203 00:16:04.907414 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.010478 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.010552 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.010573 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.010597 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.010616 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.113410 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.113480 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.113492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.113511 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.113525 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.215757 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.215817 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.215835 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.215860 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.215881 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.274233 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.274286 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.274302 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.274324 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.274340 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.296785 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:05Z is after 2025-08-24T17:21:41Z"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.302760 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.302824 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.302841 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.302865 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.302884 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.324907 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:05Z is after 2025-08-24T17:21:41Z"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.330080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.330141 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.330165 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.330194 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.330218 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.349149 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:05Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.353616 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.353799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.353822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.353850 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.353869 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:05Z is after 2025-08-24T17:21:41Z"
Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.391366 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.393336 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.393430 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.393448 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.393473 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.393491 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.496273 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.496328 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.496346 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.496368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.496387 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.599950 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.600009 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.600028 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.600052 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.600070 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.703274 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.703337 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.703354 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.703380 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.703397 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.806412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.806452 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.806461 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.806475 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.806486 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.902109 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:03:14.190714913 +0000 UTC
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.907439 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.907446 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.907587 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.907931 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.908221 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:05 crc kubenswrapper[4798]: E0203 00:16:05.908363 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.909627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.910008 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.910074 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.910125 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:05 crc kubenswrapper[4798]: I0203 00:16:05.910149 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:05Z","lastTransitionTime":"2026-02-03T00:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.013355 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.013463 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.013487 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.013514 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.013534 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.116509 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.116568 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.116585 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.116610 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.116628 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.219524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.219590 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.219617 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.219648 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.219711 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.322150 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.322209 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.322226 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.322251 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.322270 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.426210 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.426259 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.426271 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.426290 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.426301 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.529869 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.529938 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.529961 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.529986 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.530005 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.632709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.632773 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.632982 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.633011 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.633033 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.736423 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.736503 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.736525 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.736562 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.736598 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.839631 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.839695 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.839705 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.839720 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.839730 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.902870 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:26:07.074792377 +0000 UTC
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.907247 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:06 crc kubenswrapper[4798]: E0203 00:16:06.907424 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.943024 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.943095 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.943113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.943137 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:06 crc kubenswrapper[4798]: I0203 00:16:06.943156 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:06Z","lastTransitionTime":"2026-02-03T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.049565 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.049705 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.049736 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.049761 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.049779 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.153054 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.153121 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.153145 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.153172 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.153199 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.256524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.256571 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.256588 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.256612 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.256628 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.359715 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.359775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.359796 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.359819 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.359836 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.463368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.463404 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.463416 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.463432 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.463443 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.565492 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.565538 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.565554 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.565573 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.565586 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.669137 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.669207 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.669229 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.669259 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.669282 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.772295 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.772370 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.772388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.772412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.772428 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.875104 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.875178 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.875203 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.875232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.875256 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.903879 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:14:42.239922333 +0000 UTC Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.907171 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.907237 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:07 crc kubenswrapper[4798]: E0203 00:16:07.907360 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.907180 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:07 crc kubenswrapper[4798]: E0203 00:16:07.907529 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:07 crc kubenswrapper[4798]: E0203 00:16:07.907643 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.977500 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.977544 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.977563 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.977586 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:07 crc kubenswrapper[4798]: I0203 00:16:07.977603 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:07Z","lastTransitionTime":"2026-02-03T00:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.079794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.079877 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.079901 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.079929 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.079946 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.183520 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.183590 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.183612 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.183639 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.183697 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.287468 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.287549 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.287572 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.287600 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.287620 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.390214 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.390280 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.390299 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.390324 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.390343 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.494190 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.494292 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.494310 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.494333 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.494350 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.598877 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.598957 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.598981 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.599008 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.599031 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.702035 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.702113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.702136 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.702167 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.702190 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.805854 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.805932 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.805959 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.806000 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.806031 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.904393 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:31:47.711446608 +0000 UTC Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.907347 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:08 crc kubenswrapper[4798]: E0203 00:16:08.907739 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.909233 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.909313 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.909339 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.909375 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.909409 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:08Z","lastTransitionTime":"2026-02-03T00:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.928125 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:08Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.952199 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:08Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.974294 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:08Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:08 crc kubenswrapper[4798]: I0203 00:16:08.993002 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:08Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.008754 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.013372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.013418 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.013432 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.013455 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.013470 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.027943 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.055861 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f
759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.081337 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.104088 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.117521 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.117920 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.118120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.118296 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.118478 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.121634 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.139822 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.163715 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.196357 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.213885 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.221345 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.221413 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.221433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.221463 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.221484 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.235852 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z 
is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.259090 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.277033 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc 
kubenswrapper[4798]: I0203 00:16:09.300455 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:09Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.324958 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.325006 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.325017 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.325030 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.325040 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.428309 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.428362 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.428375 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.428392 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.428406 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.532334 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.532377 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.532390 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.532408 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.532419 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.636294 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.636386 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.636412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.636448 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.636471 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.738939 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.739008 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.739020 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.739060 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.739072 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.842898 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.842997 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.843017 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.843040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.843058 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.905100 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:46:40.291489665 +0000 UTC Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.907606 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.907624 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.907763 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:09 crc kubenswrapper[4798]: E0203 00:16:09.907996 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:09 crc kubenswrapper[4798]: E0203 00:16:09.908378 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:09 crc kubenswrapper[4798]: E0203 00:16:09.909044 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.947211 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.947270 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.947285 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.947312 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:09 crc kubenswrapper[4798]: I0203 00:16:09.947329 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:09Z","lastTransitionTime":"2026-02-03T00:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.051213 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.051270 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.051291 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.051314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.051332 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.154358 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.154409 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.154425 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.154447 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.154466 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.258295 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.258366 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.258382 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.258405 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.258423 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.362040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.362139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.362164 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.362194 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.362215 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.466041 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.466109 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.466121 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.466136 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.466147 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.569766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.569829 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.569849 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.569925 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.569948 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.672889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.672931 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.672941 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.672958 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.672971 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.776614 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.776709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.776729 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.776755 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.776774 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.880888 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.880972 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.880989 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.881014 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.881033 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.905831 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:58:06.28806079 +0000 UTC Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.908312 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:10 crc kubenswrapper[4798]: E0203 00:16:10.908544 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.983984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.984050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.984066 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.984091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:10 crc kubenswrapper[4798]: I0203 00:16:10.984107 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:10Z","lastTransitionTime":"2026-02-03T00:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.087540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.087638 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.087683 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.087709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.087726 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.190458 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.190506 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.190522 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.190540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.190552 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.293965 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.294028 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.294037 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.294059 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.294071 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.398235 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.398320 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.398342 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.398377 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.398449 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.502355 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.502400 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.502412 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.502428 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.502443 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.605321 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.605372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.605384 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.605404 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.605420 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.708163 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.708265 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.708289 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.708320 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.708342 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.811308 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.811388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.811415 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.811445 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.811466 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.906243 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:14:55.212635601 +0000 UTC Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.907607 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.907759 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.907684 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:11 crc kubenswrapper[4798]: E0203 00:16:11.907944 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:11 crc kubenswrapper[4798]: E0203 00:16:11.908641 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:11 crc kubenswrapper[4798]: E0203 00:16:11.908735 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.909313 4798 scope.go:117] "RemoveContainer" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" Feb 03 00:16:11 crc kubenswrapper[4798]: E0203 00:16:11.909813 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.914292 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.914339 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.914351 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.914374 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:11 crc kubenswrapper[4798]: I0203 00:16:11.914389 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:11Z","lastTransitionTime":"2026-02-03T00:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.017110 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.017177 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.017187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.017205 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.017215 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.120225 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.120277 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.120290 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.120310 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.120325 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.223621 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.223745 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.223812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.223852 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.223878 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.327470 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.327555 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.327568 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.327592 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.327605 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.432146 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.432217 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.432235 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.432264 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.432282 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.535048 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.535106 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.535123 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.535143 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.535156 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.638433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.638489 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.638502 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.638524 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.638543 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.741611 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.742034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.742175 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.742318 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.742449 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.846564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.846637 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.846676 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.846701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.846716 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.907145 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:59:58.656996825 +0000 UTC Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.908278 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:12 crc kubenswrapper[4798]: E0203 00:16:12.908859 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.951613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.951708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.951725 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.951748 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:12 crc kubenswrapper[4798]: I0203 00:16:12.951768 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:12Z","lastTransitionTime":"2026-02-03T00:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.055212 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.055274 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.055290 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.055314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.055333 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.157740 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.157793 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.157806 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.157823 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.157838 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.260935 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.260984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.260996 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.261012 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.261025 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.364240 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.364302 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.364320 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.364338 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.364351 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.467396 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.467449 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.467462 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.467483 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.467498 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.572595 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.572682 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.572698 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.572723 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.572739 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.675201 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.675273 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.675285 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.675301 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.675315 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.778063 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.778111 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.778121 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.778134 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.778147 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.880261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.880321 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.880338 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.880361 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.880381 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.907849 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.907900 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.907941 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:13 crc kubenswrapper[4798]: E0203 00:16:13.907998 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.908040 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:10:03.373531134 +0000 UTC Feb 03 00:16:13 crc kubenswrapper[4798]: E0203 00:16:13.908076 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:13 crc kubenswrapper[4798]: E0203 00:16:13.908237 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.983130 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.983210 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.983226 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.983254 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:13 crc kubenswrapper[4798]: I0203 00:16:13.983272 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:13Z","lastTransitionTime":"2026-02-03T00:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.085627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.085682 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.085690 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.085703 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.085712 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.193198 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.193302 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.193321 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.193342 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.193356 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.295129 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.295172 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.295184 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.295200 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.295212 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.398271 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.398318 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.398330 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.398351 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.398368 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.500757 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.500812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.500823 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.500837 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.500848 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.604016 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.604121 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.604142 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.604186 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.604208 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.706583 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.706618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.706627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.706640 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.706665 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.810060 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.810191 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.810201 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.810217 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.810228 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.908076 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:14 crc kubenswrapper[4798]: E0203 00:16:14.908294 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.908347 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:27:09.585544002 +0000 UTC Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.915241 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.915274 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.915283 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.915298 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:14 crc kubenswrapper[4798]: I0203 00:16:14.915310 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:14Z","lastTransitionTime":"2026-02-03T00:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.017946 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.018000 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.018012 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.018027 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.018040 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.121261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.121802 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.121814 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.121835 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.121850 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.223709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.223754 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.223766 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.223783 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.223795 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.326890 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.326962 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.326998 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.327043 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.327065 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.430142 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.430181 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.430193 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.430206 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.430214 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.451415 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.451663 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.451754 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:16:47.451731025 +0000 UTC m=+99.217721076 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.533388 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.533437 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.533450 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.533472 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.533487 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.635623 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.635722 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.635735 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.635752 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.635765 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.705386 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.705434 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.705446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.705465 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.705478 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.717015 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:15Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.721536 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.721575 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.721593 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.721613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.721626 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.732761 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:15Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.736183 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.736221 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.736231 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.736244 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.736255 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.746946 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:15Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.750181 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.750232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.750245 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.750265 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.750278 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.760956 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:15Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.764826 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.764865 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.764875 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.764890 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.764904 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.776005 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:15Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.776134 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.777722 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.777760 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.777772 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.777786 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.777795 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.880115 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.880148 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.880156 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.880169 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.880177 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.908126 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.908241 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.908292 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.908333 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.908367 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:15 crc kubenswrapper[4798]: E0203 00:16:15.908433 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.908847 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:07:41.015474942 +0000 UTC Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.982749 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.982787 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.982796 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.982809 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:15 crc kubenswrapper[4798]: I0203 00:16:15.982832 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:15Z","lastTransitionTime":"2026-02-03T00:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.085637 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.085729 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.085752 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.085783 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.085806 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.187949 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.187984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.187995 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.188010 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.188022 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.290032 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.290113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.290130 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.290153 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.290172 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.393433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.393475 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.393488 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.393504 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.393516 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.495667 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.495724 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.495737 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.495754 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.495765 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.598005 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.598060 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.598072 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.598086 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.598094 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.703282 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.703320 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.703332 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.703347 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.703359 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.805836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.805888 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.805901 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.805920 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.805932 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.907314 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:16 crc kubenswrapper[4798]: E0203 00:16:16.907443 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.909022 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:34:04.592929537 +0000 UTC Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.915161 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.915207 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.915218 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.915234 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:16 crc kubenswrapper[4798]: I0203 00:16:16.915245 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:16Z","lastTransitionTime":"2026-02-03T00:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.017228 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.017272 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.017283 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.017300 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.017311 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.119713 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.119758 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.119770 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.119785 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.119796 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.222082 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.222138 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.222149 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.222166 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.222180 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.324564 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.324598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.324606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.324622 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.324633 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.427309 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.427349 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.427362 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.427379 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.427392 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.529559 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.529618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.529630 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.529646 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.529673 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.632636 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.632713 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.632726 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.632742 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.632756 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.734822 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.734868 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.734879 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.734896 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.734910 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.837145 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.837172 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.837180 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.837193 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.837201 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.908208 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:17 crc kubenswrapper[4798]: E0203 00:16:17.908332 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.908347 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.908408 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:17 crc kubenswrapper[4798]: E0203 00:16:17.908535 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:17 crc kubenswrapper[4798]: E0203 00:16:17.908746 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.909133 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:34:51.589230215 +0000 UTC Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.939799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.939853 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.939872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.939894 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:17 crc kubenswrapper[4798]: I0203 00:16:17.939910 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:17Z","lastTransitionTime":"2026-02-03T00:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.042337 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.042383 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.042395 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.042414 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.042427 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.144689 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.144722 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.144732 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.144750 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.144762 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.247138 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.247405 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.247523 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.247593 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.247672 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.349615 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.349680 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.349695 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.349712 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.349722 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.372772 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/0.log" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.372864 4798 generic.go:334] "Generic (PLEG): container finished" podID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" containerID="8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894" exitCode=1 Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.372895 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerDied","Data":"8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.373298 4798 scope.go:117] "RemoveContainer" containerID="8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.394821 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.406991 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.416669 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.426083 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc 
kubenswrapper[4798]: I0203 00:16:18.437806 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.451643 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.451708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.451721 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.451762 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.451778 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.453238 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.464673 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.480765 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.493027 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.506340 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.513898 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.527222 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.544877 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.554941 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.554978 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.554991 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.555009 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.555021 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.555896 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.569050 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.580340 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.588906 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.601545 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.657038 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.657071 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.657083 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.657098 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.657110 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.759444 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.759508 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.759533 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.759561 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.759582 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.862417 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.862459 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.862472 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.862487 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.862499 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.907823 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:18 crc kubenswrapper[4798]: E0203 00:16:18.907948 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.909343 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:41:10.789010393 +0000 UTC Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.919635 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d
5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.932165 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.942682 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.951066 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc 
kubenswrapper[4798]: I0203 00:16:18.962942 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.964591 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.964627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.964640 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.964676 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.964692 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:18Z","lastTransitionTime":"2026-02-03T00:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.976448 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.986636 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:18 crc kubenswrapper[4798]: I0203 00:16:18.997044 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:18Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.014274 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00
:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.029958 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.044075 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.060337 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af8
99499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026
-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.066432 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.066483 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.066499 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.066520 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.066536 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.074015 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.092762 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.104122 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.115013 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.129813 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.142880 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.168922 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.168966 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.168976 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.169010 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.169020 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.270982 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.271041 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.271061 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.271085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.271103 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.374036 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.374113 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.374132 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.374584 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.374636 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.377892 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/0.log" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.377956 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerStarted","Data":"f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.393669 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.405846 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.418873 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.439856 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.453336 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.465131 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.473630 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.477219 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.477246 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.477256 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.477270 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.477279 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.485216 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798
c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.499149 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.513325 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.523421 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.534484 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.547976 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.566130 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.578735 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.579525 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.579558 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.579570 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.579584 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.579595 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.590515 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.603118 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.613431 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:19Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:19 crc 
kubenswrapper[4798]: I0203 00:16:19.681786 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.681820 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.681828 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.681840 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.681851 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.784259 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.784290 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.784301 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.784314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.784325 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.886846 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.886879 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.886890 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.886904 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.886912 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.907259 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.907289 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.907296 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:19 crc kubenswrapper[4798]: E0203 00:16:19.907526 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:19 crc kubenswrapper[4798]: E0203 00:16:19.907577 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:19 crc kubenswrapper[4798]: E0203 00:16:19.907394 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.910320 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:31:37.488967471 +0000 UTC Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.990305 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.990374 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.990397 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.990426 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:19 crc kubenswrapper[4798]: I0203 00:16:19.990446 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:19Z","lastTransitionTime":"2026-02-03T00:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.092399 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.092437 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.092448 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.092463 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.092473 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.194627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.194694 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.194706 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.194723 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.194737 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.297616 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.297679 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.297692 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.297708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.297719 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.399560 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.399800 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.399862 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.399920 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.399980 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.502908 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.502955 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.502966 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.502982 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.502994 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.605965 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.605998 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.606007 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.606019 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.606028 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.708389 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.708428 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.708439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.708454 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.708465 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.810789 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.811025 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.811125 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.811249 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.811339 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.908022 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:20 crc kubenswrapper[4798]: E0203 00:16:20.908366 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.910408 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:00:06.857524585 +0000 UTC Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.912810 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.912836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.912845 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.912857 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:20 crc kubenswrapper[4798]: I0203 00:16:20.912867 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:20Z","lastTransitionTime":"2026-02-03T00:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.014502 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.014531 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.014540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.014559 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.014567 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.117567 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.117622 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.117633 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.117650 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.117687 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.220143 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.220179 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.220187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.220201 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.220214 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.322445 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.322480 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.322491 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.322505 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.322514 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.424638 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.424862 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.424954 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.425062 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.425177 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.527609 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.527640 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.527667 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.527683 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.527693 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.629999 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.630046 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.630057 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.630075 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.630086 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.732840 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.732884 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.732896 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.732914 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.732928 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.835762 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.835802 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.835815 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.835830 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.835842 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.907889 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:21 crc kubenswrapper[4798]: E0203 00:16:21.908004 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.908166 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:21 crc kubenswrapper[4798]: E0203 00:16:21.908233 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.908378 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:21 crc kubenswrapper[4798]: E0203 00:16:21.908613 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.911500 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:57:33.470476477 +0000 UTC Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.938225 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.938252 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.938261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.938272 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:21 crc kubenswrapper[4798]: I0203 00:16:21.938281 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:21Z","lastTransitionTime":"2026-02-03T00:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.040695 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.040741 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.040752 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.040767 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.040777 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.143474 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.143510 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.143519 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.143532 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.143541 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.246305 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.246341 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.246351 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.246365 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.246375 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.348613 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.348937 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.349040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.349130 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.349210 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.451810 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.451879 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.451893 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.451910 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.451919 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.554142 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.554187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.554196 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.554210 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.554219 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.656373 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.656410 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.656421 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.656436 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.656447 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.759129 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.759191 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.759214 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.759236 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.759255 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.862157 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.862228 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.862250 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.862282 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.862304 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.907842 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:22 crc kubenswrapper[4798]: E0203 00:16:22.907975 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.911590 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:17:08.211777627 +0000 UTC Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.964334 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.964602 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.964618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.964636 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:22 crc kubenswrapper[4798]: I0203 00:16:22.964671 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:22Z","lastTransitionTime":"2026-02-03T00:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.067392 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.067438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.067447 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.067460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.067468 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.169391 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.169438 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.169446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.169460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.169469 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.272136 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.272173 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.272182 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.272196 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.272205 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.374465 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.374503 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.374515 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.374528 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.374540 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.477157 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.477206 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.477217 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.477232 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.477241 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.579920 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.579965 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.579974 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.579988 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.579998 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.682326 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.682365 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.682373 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.682387 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.682396 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.785087 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.785129 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.785139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.785155 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.785167 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.890261 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.890314 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.890328 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.890348 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.890363 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.907930 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.908003 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:23 crc kubenswrapper[4798]: E0203 00:16:23.908083 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.908003 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:23 crc kubenswrapper[4798]: E0203 00:16:23.908208 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:23 crc kubenswrapper[4798]: E0203 00:16:23.908249 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.912022 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:50:17.914026748 +0000 UTC Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.993275 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.993338 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.993352 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.993369 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:23 crc kubenswrapper[4798]: I0203 00:16:23.993382 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:23Z","lastTransitionTime":"2026-02-03T00:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.096057 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.096112 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.096124 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.096140 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.096152 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.198398 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.198439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.198449 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.198464 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.198473 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.302326 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.302390 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.302407 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.302428 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.302447 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.405040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.405101 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.405119 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.405141 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.405157 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.508516 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.508590 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.508615 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.508642 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.508696 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.611606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.611748 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.611774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.611802 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.611821 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.715439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.715505 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.715517 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.715540 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.715555 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.818038 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.818080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.818092 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.818114 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.818133 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.907848 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:24 crc kubenswrapper[4798]: E0203 00:16:24.907992 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.912325 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:21:28.274612655 +0000 UTC Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.921864 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.922162 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.922246 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.922342 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:24 crc kubenswrapper[4798]: I0203 00:16:24.922425 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:24Z","lastTransitionTime":"2026-02-03T00:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.025326 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.025544 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.025621 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.025777 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.025879 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.127898 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.127953 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.127963 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.127980 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.127990 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.230462 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.230517 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.230531 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.230550 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.230562 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.332841 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.332881 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.332893 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.332911 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.332925 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.435933 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.435974 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.435985 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.436002 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.436013 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.538883 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.538936 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.538949 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.538967 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.538979 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.641587 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.641676 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.641691 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.641707 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.641767 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.744565 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.744609 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.744618 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.744632 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.744643 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.786028 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.786079 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.786087 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.786102 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.786111 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.801220 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:25Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.808035 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.808104 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.808125 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.808150 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.808172 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.823413 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:25Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.826982 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.827035 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.827051 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.827069 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.827082 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.839425 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:25Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.843867 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.843935 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.843953 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.843978 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.843996 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.855872 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:25Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.859080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.859120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.859132 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.859148 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.859160 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.870551 4798 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a6f95815-93e8-4b6e-91cb-e8db1e0a2c7b\\\",\\\"systemUUID\\\":\\\"d689d10a-78fe-472b-864b-496c283a966b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:25Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.870685 4798 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.872336 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.872359 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.872369 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.872384 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.872396 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.907704 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.908158 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.908431 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.908535 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.908745 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:25 crc kubenswrapper[4798]: E0203 00:16:25.908825 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.913202 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:14:26.799430032 +0000 UTC Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.973840 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.973868 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.973876 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.973887 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:25 crc kubenswrapper[4798]: I0203 00:16:25.973896 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:25Z","lastTransitionTime":"2026-02-03T00:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.076298 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.076336 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.076344 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.076359 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.076368 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.178958 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.179011 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.179027 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.179048 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.179065 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.281569 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.281606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.281616 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.281630 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.281641 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.383401 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.383439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.383451 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.383480 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.383490 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.487014 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.487097 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.487126 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.487158 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.487183 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.590030 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.590100 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.590122 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.590150 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.590172 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.694012 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.694071 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.694085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.694100 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.694110 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.796401 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.796433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.796442 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.796454 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.796463 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.900343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.900392 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.900405 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.900426 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.900440 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:26Z","lastTransitionTime":"2026-02-03T00:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.907881 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:26 crc kubenswrapper[4798]: E0203 00:16:26.908351 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.908490 4798 scope.go:117] "RemoveContainer" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" Feb 03 00:16:26 crc kubenswrapper[4798]: I0203 00:16:26.913432 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:07:25.33704402 +0000 UTC Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.004054 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.004285 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.004475 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.004696 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.004877 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.108145 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.108418 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.108630 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.108804 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.108927 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.213287 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.213336 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.213346 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.213365 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.213430 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.316054 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.316092 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.316105 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.316120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.316132 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.408895 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/2.log" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.413248 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.414186 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.419254 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.419471 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.419605 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.419764 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.419893 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.434688 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.452565 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.465759 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.482836 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.505742 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.520291 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.522518 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.522554 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.522563 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.522577 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.522589 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.539164 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.555341 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.569515 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc 
kubenswrapper[4798]: I0203 00:16:27.589256 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.610911 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.625698 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.625743 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.625754 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.625770 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.625782 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.630893 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.653319 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.674205 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.697840 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.716128 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5dc2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.728451 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.728495 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.728504 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.728518 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.728528 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.752528 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.769821 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:27Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.831343 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.831394 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.831404 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.831417 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.831442 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.907446 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.907510 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.907528 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:27 crc kubenswrapper[4798]: E0203 00:16:27.907633 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:27 crc kubenswrapper[4798]: E0203 00:16:27.907785 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:27 crc kubenswrapper[4798]: E0203 00:16:27.907875 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.914469 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:33:51.573444249 +0000 UTC Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.933067 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.933109 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.933122 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.933139 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:27 crc kubenswrapper[4798]: I0203 00:16:27.933152 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:27Z","lastTransitionTime":"2026-02-03T00:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.035091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.035161 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.035175 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.035190 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.035201 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.137839 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.137882 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.137895 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.137911 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.137925 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.240892 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.240951 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.240959 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.240972 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.240982 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.343789 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.343825 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.343834 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.343847 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.343859 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.419875 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.420917 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/2.log" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.425690 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" exitCode=1 Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.425740 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.425784 4798 scope.go:117] "RemoveContainer" containerID="a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.426972 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:16:28 crc kubenswrapper[4798]: E0203 00:16:28.427277 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.446815 4798 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.446889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.446906 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.446929 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.446946 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.447853 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.468810 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac341
76838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.484207 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc 
kubenswrapper[4798]: I0203 00:16:28.495740 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.508331 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.527438 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.543495 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.549177 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.549213 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc 
kubenswrapper[4798]: I0203 00:16:28.549225 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.549240 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.549251 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.558240 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.572767 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.584880 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.605055 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.620023 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.640184 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.653836 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.653903 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.653920 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.653946 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.653963 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.654955 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.673293 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8
647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.695848 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723b
faada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.720074 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:28Z\\\",\\\"message\\\":\\\"s/informers/externalversions/factory.go:140\\\\nI0203 00:16:28.140958 6814 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0203 00:16:28.141133 6814 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141221 6814 reflector.go:311] 
Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141873 6814 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:16:28.141926 6814 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 00:16:28.141987 6814 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:16:28.142050 6814 factory.go:656] Stopping watch factory\\\\nI0203 00:16:28.142070 6814 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:16:28.164594 6814 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0203 00:16:28.164632 6814 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0203 00:16:28.164739 6814 ovnkube.go:599] Stopped ovnkube\\\\nI0203 00:16:28.164775 6814 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 00:16:28.164873 6814 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 
00:16:28.734432 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.756741 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.756852 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.756879 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.756911 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.756936 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.859887 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.860406 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.860527 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.860614 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.860704 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.908297 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:28 crc kubenswrapper[4798]: E0203 00:16:28.908460 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.914730 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:33:16.839444522 +0000 UTC Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.924292 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.940291 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.958574 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.963716 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.963781 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.963794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.963815 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.963834 4798 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:28Z","lastTransitionTime":"2026-02-03T00:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:28 crc kubenswrapper[4798]: I0203 00:16:28.983481 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.000594 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:28Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.015562 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.028278 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.052123 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a113a5b1daade558722136f1090129bdafac6b1d135b7d3e911ed1105b9eb37c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:15:58Z\\\",\\\"message\\\":\\\"[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 00:15:58.026095 6420 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 00:15:58.026434 6420 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 00:15:58.026489 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:28Z\\\",\\\"message\\\":\\\"s/informers/externalversions/factory.go:140\\\\nI0203 00:16:28.140958 6814 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0203 00:16:28.141133 6814 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141221 6814 reflector.go:311] 
Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141873 6814 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:16:28.141926 6814 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 00:16:28.141987 6814 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:16:28.142050 6814 factory.go:656] Stopping watch factory\\\\nI0203 00:16:28.142070 6814 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:16:28.164594 6814 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0203 00:16:28.164632 6814 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0203 00:16:28.164739 6814 ovnkube.go:599] Stopped ovnkube\\\\nI0203 00:16:28.164775 6814 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 00:16:28.164873 6814 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 
00:16:29.065550 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.066110 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.066143 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.066155 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.066170 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.066180 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.082093 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.098447 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.110581 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.122744 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.141150 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.154825 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.167624 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] 
Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.169067 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.169175 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.169241 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.169305 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.169367 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.181249 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.192916 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc 
kubenswrapper[4798]: I0203 00:16:29.272043 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.272091 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.272102 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.272122 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.272135 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.374825 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.374863 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.374872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.374886 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.374896 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.431116 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.436285 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:16:29 crc kubenswrapper[4798]: E0203 00:16:29.436836 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.450234 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84295e9a-e49f-4550-8b66-c13c1d9a31ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:1
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.464717 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bb16c3599d6149d6981f97d083ee866ab1f157a08b1ba9825442920b5af8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.478908 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8mqs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80722df-c977-49e0-b1ec-a83fea1c4f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6c76bd68818c8e42c934e6a93bcffb93289d3993a975f8490ce5d
c2d7db601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwh9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8mqs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.479911 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.479989 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.480016 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.480046 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.480070 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.510696 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9827e9-9254-47d3-8104-063cecf114b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6167533a6a616c137b890a0878d4ca54de45315ddf8799777a78b065f287f8f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5aaaf6e417770868b0d07795a8dce7a762fff69be8f67fa8b18f2d725b3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://174155cc26bc82aa227742b9171a1bdc7e162fc52f673214b383d6fe29d6171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78de22de18e933207990153b342930848b80ccb2a9f4111a17e195a988d4b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5985e1d80e9ba103ce9794b795647710c64f0bdf9f6b6c7ca7e3b1cc4bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\
":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d07a1f2a3582ac7c723fd413f349af0e4349811487cc43a37168ccec44f60d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c346372276be992164ec7d9439e35303d2f57a1ade21585088db07bf9824488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a6f759f7e36d14618b258bb2202eeeb224443cbb69699b0c9eed894d29c63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.530571 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.549443 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da2defc8ae8b05759890f95063d7f6820ae261639188fbf19290f98cfcc080de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5cc605f6582cf6832be9a756695825774f5ce389556957d9769286d0e3640f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.564318 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4nx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f968882a-8e40-4760-b1fe-2d456390d30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a047e7a57cedd76d9c1585aa1a50f66fc1ace45b59f9d32e284d44c940a79f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swvtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4nx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.581169 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6602c86-f236-4772-b70f-a8b4847b95dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363ed58e5d7ec642280fa4eeb3b84af899499279e5cca2f44a3762721b45758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrcdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b842j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.582910 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.582955 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.582967 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.582984 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.582997 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.605493 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60953d56-8dc2-4adf-96bc-078a558476e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504c36033083a61ef0f41fbba999a505d9d387e5c23264a35125fd50e69bf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54fefced1eac1c6e1024bd18508a89c32909e7a63e7a3f25df2d96b1118a9f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://602a550d86d16938b28f8a19bd0c0a100eb7ed21f53b638c7e28041afd44842b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf10b88bbfd82e85e0d89dfe04ab2c1d842ca18b72951206f0c5d45cd08f1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723bfaada12d856de099b280840c96e58ff2d85848b6413085b241d28006cb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5d664917ca71d6fe0b6d106321163216d0367eadcc72b2a4e45b4824530a79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50cf8c273d968ebd9f95204413552968de48e903c49ac8016079aaebfa15b33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d58sd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nhpkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.628437 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71790a2-e390-400a-a288-2a3af8467047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:28Z\\\",\\\"message\\\":\\\"s/informers/externalversions/factory.go:140\\\\nI0203 00:16:28.140958 6814 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0203 00:16:28.141133 6814 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141221 6814 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 00:16:28.141873 6814 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 00:16:28.141926 6814 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 00:16:28.141987 6814 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 00:16:28.142050 6814 factory.go:656] Stopping watch factory\\\\nI0203 00:16:28.142070 6814 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 00:16:28.164594 6814 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0203 00:16:28.164632 6814 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0203 00:16:28.164739 6814 ovnkube.go:599] Stopped ovnkube\\\\nI0203 00:16:28.164775 6814 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 00:16:28.164873 6814 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee264a53d624f704fd
72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxchl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gzlj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.644900 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2e011-9d28-486c-8271-62832c2c6324\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebae4dacf97b2b134bfc60501e61d43918c37ae414740771908db0afe0195ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b3142e9893f22bf03ac0f2935798c4cfbfadea4911f655023634b4cf61a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6400ae64d5cc6c1f49d285bf4274e92e3905ce52d43e3938f6c99e583f90df21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3ece09a4762670459423ed39e563099723cdbb52f78b4edaae6f80ec5eb33624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T00:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.661160 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ktf4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"106da5aa-5f2e-4d32-b172-4844ad6de7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6c6b9ea2222
959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T00:16:17Z\\\",\\\"message\\\":\\\"2026-02-03T00:15:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d\\\\n2026-02-03T00:15:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_875157a4-3182-4f2f-8108-7a7fc380944d to /host/opt/cni/bin/\\\\n2026-02-03T00:15:32Z [verbose] multus-daemon started\\\\n2026-02-03T00:15:32Z [verbose] Readiness Indicator file check\\\\n2026-02-03T00:16:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4dpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ktf4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.672864 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db2cbb1b-24c0-405a-ae34-2c1b4af6f999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b3ce70bb8db498a796258ba1561e3e7fc260a82a2649e2e1a3e4132bcfe4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd3875815c505e550007aa406ff23c56ac34176838363e4f894202ad69882389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89dk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j6wfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.689341 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"039e204d-4d36-471e-990f-4eb5b4a193fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4pnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hzk9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc 
kubenswrapper[4798]: I0203 00:16:29.690684 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.690753 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.690774 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.690799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.690817 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.705237 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad68d98-2014-4595-bfa1-3c2d01ea9c1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ee0d4bdb8f37f7bbb7a130615b9df7a612fbcc74aadfb5eeb319935d50616d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da94f34ed1
39edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe6f4f8a090678c9eb76141cc1896ce9c032c026e768c884350cb64dace63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a3d5b07a894ad5cb3895f0d7890997de76f76e5dd6e4cf0d09096420e17a2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T00:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.717150 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9f9dfba8a37fa4a39f59f8b4a7cdc9c8493d91a2aef18275fae7d7989f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T00:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.727704 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.738814 4798 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T00:15:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T00:16:29Z is after 2025-08-24T17:21:41Z" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.792977 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.793005 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc 
kubenswrapper[4798]: I0203 00:16:29.793015 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.793028 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.793038 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.895853 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.895932 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.895959 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.895989 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.896014 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.907352 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.907365 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.907358 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:29 crc kubenswrapper[4798]: E0203 00:16:29.907461 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:29 crc kubenswrapper[4798]: E0203 00:16:29.907747 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:29 crc kubenswrapper[4798]: E0203 00:16:29.907783 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.915730 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:39:55.85190331 +0000 UTC Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.999372 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.999417 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.999429 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.999447 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:29 crc kubenswrapper[4798]: I0203 00:16:29.999465 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:29Z","lastTransitionTime":"2026-02-03T00:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.102539 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.102608 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.102634 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.102701 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.102727 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.206066 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.206124 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.206144 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.206167 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.206185 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.309177 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.309253 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.309276 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.309306 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.309329 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.412983 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.413034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.413052 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.413075 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.413093 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.516769 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.516855 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.516874 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.516902 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.516927 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.620426 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.620477 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.620488 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.620508 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.620522 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.725470 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.725528 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.725539 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.725556 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.725568 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.828775 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.828849 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.828866 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.828889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.828908 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.907935 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:30 crc kubenswrapper[4798]: E0203 00:16:30.908131 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.916245 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:59:50.121202799 +0000 UTC
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.931391 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.931474 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.931499 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.931529 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:30 crc kubenswrapper[4798]: I0203 00:16:30.931552 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:30Z","lastTransitionTime":"2026-02-03T00:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.034110 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.034179 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.034187 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.034199 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.034207 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.138242 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.138338 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.138357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.138381 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.138401 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.241522 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.241587 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.241611 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.241637 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.241688 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.344627 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.344704 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.344717 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.344732 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.344744 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.447033 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.447069 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.447080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.447094 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.447104 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.550345 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.550404 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.550422 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.550446 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.550462 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.653951 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.654009 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.654027 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.654050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.654070 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.756292 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.756357 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.756375 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.756400 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.756417 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.839137 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.839342 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.839377 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.839331937 +0000 UTC m=+147.605321998 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.839486 4798 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.839558 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.839537032 +0000 UTC m=+147.605527083 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.839553 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.839995 4798 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.840156 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.840115387 +0000 UTC m=+147.606105438 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.858812 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.858861 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.858872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.858887 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.858901 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.908042 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.908170 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.908205 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.908260 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.908347 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.908558 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.916450 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:29:37.180258918 +0000 UTC
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.941279 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.941382 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941427 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941448 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941459 4798 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941513 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.941498345 +0000 UTC m=+147.707488366 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941533 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941551 4798 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941567 4798 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 00:16:31 crc kubenswrapper[4798]: E0203 00:16:31.941615 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.941598839 +0000 UTC m=+147.707588860 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.960922 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.961021 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.961041 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.961062 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:31 crc kubenswrapper[4798]: I0203 00:16:31.961079 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:31Z","lastTransitionTime":"2026-02-03T00:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.063821 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.063862 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.063872 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.063889 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.063900 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.166731 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.166783 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.166799 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.166821 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.166840 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.269773 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.270152 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.270349 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.270505 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.270651 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.373072 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.373107 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.373120 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.373134 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.373143 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.477709 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.477784 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.477800 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.477821 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.477832 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.580809 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.580878 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.580902 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.580930 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.580951 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.683996 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.684034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.684045 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.684095 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.684121 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.786316 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.786355 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.786365 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.786379 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.786389 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.889170 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.889498 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.889735 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.889940 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.890141 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.907920 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:32 crc kubenswrapper[4798]: E0203 00:16:32.908303 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.917007 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:05:12.953245947 +0000 UTC Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.993134 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.993204 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.993228 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.993252 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:32 crc kubenswrapper[4798]: I0203 00:16:32.993269 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:32Z","lastTransitionTime":"2026-02-03T00:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.095633 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.097122 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.097607 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.098144 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.098484 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.202338 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.202410 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.202433 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.202464 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.202488 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.305538 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.305598 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.305612 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.305629 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.305642 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.409312 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.409368 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.409384 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.409402 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.409416 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.512551 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.512873 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.512966 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.513054 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.513129 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.616461 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.616536 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.616560 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.616583 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.616600 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.719379 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.719455 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.719473 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.719496 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.719515 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.823128 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.823186 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.823207 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.823229 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.823262 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.907777 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.907959 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:33 crc kubenswrapper[4798]: E0203 00:16:33.908284 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.908723 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:33 crc kubenswrapper[4798]: E0203 00:16:33.908923 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:33 crc kubenswrapper[4798]: E0203 00:16:33.909068 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.917845 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:52:11.667306243 +0000 UTC Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.925970 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.925999 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.926009 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.926021 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:33 crc kubenswrapper[4798]: I0203 00:16:33.926030 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:33Z","lastTransitionTime":"2026-02-03T00:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.029582 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.029642 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.029697 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.029728 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.029751 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.134962 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.135352 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.135375 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.135400 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.135420 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.237629 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.237692 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.237708 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.237724 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.237733 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.340146 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.340176 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.340184 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.340195 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.340204 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.442827 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.442923 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.442954 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.442979 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.442999 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.546617 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.546704 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.546724 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.546748 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.546768 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.648964 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.649031 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.649045 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.649060 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.649071 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.751030 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.751064 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.751080 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.751095 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.751106 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.853937 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.854173 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.854240 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.854303 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.854369 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.907350 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:16:34 crc kubenswrapper[4798]: E0203 00:16:34.907880 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.919078 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:36:48.687128401 +0000 UTC Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.957112 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.957250 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.957332 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.957370 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:34 crc kubenswrapper[4798]: I0203 00:16:34.957447 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:34Z","lastTransitionTime":"2026-02-03T00:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.060870 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.060943 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.060968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.061012 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.061037 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.163795 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.163844 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.163853 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.163867 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.163875 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.267721 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.267794 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.267806 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.267827 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.267843 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.371009 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.371073 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.371085 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.371106 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.371121 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.473915 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.473979 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.473994 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.474024 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.474036 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.576243 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.576281 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.576293 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.576308 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.576317 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.678784 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.678848 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.678864 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.678886 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.678902 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.781404 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.781439 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.781447 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.781460 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.781469 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.883537 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.883568 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.883593 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.883606 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.883617 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.907522 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.907596 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:16:35 crc kubenswrapper[4798]: E0203 00:16:35.907626 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.907821 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:16:35 crc kubenswrapper[4798]: E0203 00:16:35.907815 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:16:35 crc kubenswrapper[4798]: E0203 00:16:35.907878 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.919851 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:44:51.3752183 +0000 UTC Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.985932 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.985960 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.985968 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.985979 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:35 crc kubenswrapper[4798]: I0203 00:16:35.985989 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:35Z","lastTransitionTime":"2026-02-03T00:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.029966 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.030040 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.030050 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.030062 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.030071 4798 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T00:16:36Z","lastTransitionTime":"2026-02-03T00:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.087859 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l"] Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.088434 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.093219 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.093263 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.093700 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.094945 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.171556 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.171530698 podStartE2EDuration="1m10.171530698s" podCreationTimestamp="2026-02-03 00:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.153674931 +0000 UTC m=+87.919664972" watchObservedRunningTime="2026-02-03 00:16:36.171530698 +0000 UTC m=+87.937520729" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.187306 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t8mqs" podStartSLOduration=67.187279809 podStartE2EDuration="1m7.187279809s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.187235428 +0000 UTC m=+87.953225479" watchObservedRunningTime="2026-02-03 00:16:36.187279809 +0000 UTC m=+87.953269830" Feb 03 
00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.187826 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d312059-c41b-4c4c-a4e8-73894f579f7d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.187883 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d312059-c41b-4c4c-a4e8-73894f579f7d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.187916 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.188045 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.188085 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d312059-c41b-4c4c-a4e8-73894f579f7d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.258069 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.258048809 podStartE2EDuration="1m9.258048809s" podCreationTimestamp="2026-02-03 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.257057203 +0000 UTC m=+88.023047214" watchObservedRunningTime="2026-02-03 00:16:36.258048809 +0000 UTC m=+88.024038830" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288485 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288531 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d312059-c41b-4c4c-a4e8-73894f579f7d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288565 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d312059-c41b-4c4c-a4e8-73894f579f7d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: 
\"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288591 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d312059-c41b-4c4c-a4e8-73894f579f7d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288612 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.288699 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.289030 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d312059-c41b-4c4c-a4e8-73894f579f7d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.290535 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/3d312059-c41b-4c4c-a4e8-73894f579f7d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.300304 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d312059-c41b-4c4c-a4e8-73894f579f7d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.301958 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4nx5v" podStartSLOduration=68.301936831 podStartE2EDuration="1m8.301936831s" podCreationTimestamp="2026-02-03 00:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.301921771 +0000 UTC m=+88.067911772" watchObservedRunningTime="2026-02-03 00:16:36.301936831 +0000 UTC m=+88.067926862" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.310258 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d312059-c41b-4c4c-a4e8-73894f579f7d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2lh2l\" (UID: \"3d312059-c41b-4c4c-a4e8-73894f579f7d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.316628 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podStartSLOduration=67.316604863 podStartE2EDuration="1m7.316604863s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.315955436 +0000 UTC m=+88.081945447" watchObservedRunningTime="2026-02-03 00:16:36.316604863 +0000 UTC m=+88.082594874" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.335347 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nhpkc" podStartSLOduration=67.335328604 podStartE2EDuration="1m7.335328604s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.334811349 +0000 UTC m=+88.100801370" watchObservedRunningTime="2026-02-03 00:16:36.335328604 +0000 UTC m=+88.101318615" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.380946 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.380923271 podStartE2EDuration="35.380923271s" podCreationTimestamp="2026-02-03 00:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.380322705 +0000 UTC m=+88.146312746" watchObservedRunningTime="2026-02-03 00:16:36.380923271 +0000 UTC m=+88.146913302" Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.394962 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ktf4c" podStartSLOduration=67.394942405 podStartE2EDuration="1m7.394942405s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.39474811 +0000 UTC m=+88.160738161" watchObservedRunningTime="2026-02-03 00:16:36.394942405 +0000 UTC m=+88.160932416" Feb 03 
00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.407641 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j6wfm" podStartSLOduration=67.407625444 podStartE2EDuration="1m7.407625444s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.407569403 +0000 UTC m=+88.173559404" watchObservedRunningTime="2026-02-03 00:16:36.407625444 +0000 UTC m=+88.173615455"
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.411715 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l"
Feb 03 00:16:36 crc kubenswrapper[4798]: W0203 00:16:36.427454 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d312059_c41b_4c4c_a4e8_73894f579f7d.slice/crio-32678a754e4987ab831847fc5201d968c3101b8c6ad4cf96659c447c7c7fb506 WatchSource:0}: Error finding container 32678a754e4987ab831847fc5201d968c3101b8c6ad4cf96659c447c7c7fb506: Status 404 returned error can't find the container with id 32678a754e4987ab831847fc5201d968c3101b8c6ad4cf96659c447c7c7fb506
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.464921 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" event={"ID":"3d312059-c41b-4c4c-a4e8-73894f579f7d","Type":"ContainerStarted","Data":"32678a754e4987ab831847fc5201d968c3101b8c6ad4cf96659c447c7c7fb506"}
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.907719 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:36 crc kubenswrapper[4798]: E0203 00:16:36.907936 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.920316 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:53:05.076071775 +0000 UTC
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.920415 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 03 00:16:36 crc kubenswrapper[4798]: I0203 00:16:36.930307 4798 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.469996 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" event={"ID":"3d312059-c41b-4c4c-a4e8-73894f579f7d","Type":"ContainerStarted","Data":"3434d2d63d2f6b6ec47616792bd6f86ecb8882c7b7462757714bcafe5c666ed4"}
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.487215 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.487183541 podStartE2EDuration="1m9.487183541s" podCreationTimestamp="2026-02-03 00:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:36.435075948 +0000 UTC m=+88.201065959" watchObservedRunningTime="2026-02-03 00:16:37.487183541 +0000 UTC m=+89.253173602"
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.487798 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2lh2l" podStartSLOduration=68.487781828 podStartE2EDuration="1m8.487781828s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:37.487380546 +0000 UTC m=+89.253370557" watchObservedRunningTime="2026-02-03 00:16:37.487781828 +0000 UTC m=+89.253771879"
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.908287 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.908387 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.908327 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:37 crc kubenswrapper[4798]: E0203 00:16:37.908538 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:37 crc kubenswrapper[4798]: E0203 00:16:37.908693 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:37 crc kubenswrapper[4798]: E0203 00:16:37.908935 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:37 crc kubenswrapper[4798]: I0203 00:16:37.939586 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 03 00:16:38 crc kubenswrapper[4798]: I0203 00:16:38.907261 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:38 crc kubenswrapper[4798]: E0203 00:16:38.908503 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:38 crc kubenswrapper[4798]: I0203 00:16:38.922584 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.922553902 podStartE2EDuration="1.922553902s" podCreationTimestamp="2026-02-03 00:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:16:38.921814933 +0000 UTC m=+90.687805014" watchObservedRunningTime="2026-02-03 00:16:38.922553902 +0000 UTC m=+90.688543953"
Feb 03 00:16:39 crc kubenswrapper[4798]: I0203 00:16:39.908162 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:39 crc kubenswrapper[4798]: I0203 00:16:39.908333 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:39 crc kubenswrapper[4798]: E0203 00:16:39.908519 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:39 crc kubenswrapper[4798]: I0203 00:16:39.908565 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:39 crc kubenswrapper[4798]: E0203 00:16:39.908760 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:39 crc kubenswrapper[4798]: E0203 00:16:39.908871 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:40 crc kubenswrapper[4798]: I0203 00:16:40.907394 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:40 crc kubenswrapper[4798]: E0203 00:16:40.907559 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:41 crc kubenswrapper[4798]: I0203 00:16:41.907414 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:41 crc kubenswrapper[4798]: I0203 00:16:41.907492 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:41 crc kubenswrapper[4798]: E0203 00:16:41.907566 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:41 crc kubenswrapper[4798]: E0203 00:16:41.907601 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:41 crc kubenswrapper[4798]: I0203 00:16:41.907708 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:41 crc kubenswrapper[4798]: E0203 00:16:41.908093 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:41 crc kubenswrapper[4798]: I0203 00:16:41.908346 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"
Feb 03 00:16:41 crc kubenswrapper[4798]: E0203 00:16:41.908485 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047"
Feb 03 00:16:42 crc kubenswrapper[4798]: I0203 00:16:42.907395 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:42 crc kubenswrapper[4798]: E0203 00:16:42.907537 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:43 crc kubenswrapper[4798]: I0203 00:16:43.907055 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:43 crc kubenswrapper[4798]: I0203 00:16:43.907094 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:43 crc kubenswrapper[4798]: I0203 00:16:43.907064 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:43 crc kubenswrapper[4798]: E0203 00:16:43.907562 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:43 crc kubenswrapper[4798]: E0203 00:16:43.907676 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:43 crc kubenswrapper[4798]: E0203 00:16:43.907746 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:44 crc kubenswrapper[4798]: I0203 00:16:44.908318 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:44 crc kubenswrapper[4798]: E0203 00:16:44.908518 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:45 crc kubenswrapper[4798]: I0203 00:16:45.907796 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:45 crc kubenswrapper[4798]: I0203 00:16:45.907918 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:45 crc kubenswrapper[4798]: I0203 00:16:45.908419 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:45 crc kubenswrapper[4798]: E0203 00:16:45.908542 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:45 crc kubenswrapper[4798]: E0203 00:16:45.908716 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:45 crc kubenswrapper[4798]: E0203 00:16:45.908972 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:46 crc kubenswrapper[4798]: I0203 00:16:46.907420 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:46 crc kubenswrapper[4798]: E0203 00:16:46.908118 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:47 crc kubenswrapper[4798]: I0203 00:16:47.552310 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:47 crc kubenswrapper[4798]: E0203 00:16:47.552485 4798 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 00:16:47 crc kubenswrapper[4798]: E0203 00:16:47.552588 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs podName:039e204d-4d36-471e-990f-4eb5b4a193fc nodeName:}" failed. No retries permitted until 2026-02-03 00:17:51.552561446 +0000 UTC m=+163.318551487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs") pod "network-metrics-daemon-hzk9m" (UID: "039e204d-4d36-471e-990f-4eb5b4a193fc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 00:16:47 crc kubenswrapper[4798]: I0203 00:16:47.907160 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:47 crc kubenswrapper[4798]: E0203 00:16:47.907335 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:47 crc kubenswrapper[4798]: I0203 00:16:47.907178 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:47 crc kubenswrapper[4798]: E0203 00:16:47.907433 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:47 crc kubenswrapper[4798]: I0203 00:16:47.907160 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:47 crc kubenswrapper[4798]: E0203 00:16:47.907517 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:48 crc kubenswrapper[4798]: I0203 00:16:48.908169 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:48 crc kubenswrapper[4798]: E0203 00:16:48.910284 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:49 crc kubenswrapper[4798]: I0203 00:16:49.907870 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:49 crc kubenswrapper[4798]: E0203 00:16:49.908069 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:49 crc kubenswrapper[4798]: I0203 00:16:49.908023 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:49 crc kubenswrapper[4798]: I0203 00:16:49.908470 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:49 crc kubenswrapper[4798]: E0203 00:16:49.908987 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:49 crc kubenswrapper[4798]: E0203 00:16:49.909098 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:50 crc kubenswrapper[4798]: I0203 00:16:50.908295 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:50 crc kubenswrapper[4798]: E0203 00:16:50.908449 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:51 crc kubenswrapper[4798]: I0203 00:16:51.907732 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:51 crc kubenswrapper[4798]: I0203 00:16:51.907769 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:51 crc kubenswrapper[4798]: I0203 00:16:51.907878 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:51 crc kubenswrapper[4798]: E0203 00:16:51.908049 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:51 crc kubenswrapper[4798]: E0203 00:16:51.908226 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:51 crc kubenswrapper[4798]: E0203 00:16:51.908612 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:52 crc kubenswrapper[4798]: I0203 00:16:52.908457 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:52 crc kubenswrapper[4798]: E0203 00:16:52.908592 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:53 crc kubenswrapper[4798]: I0203 00:16:53.907916 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:53 crc kubenswrapper[4798]: I0203 00:16:53.907977 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:53 crc kubenswrapper[4798]: I0203 00:16:53.908065 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:53 crc kubenswrapper[4798]: E0203 00:16:53.908223 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:53 crc kubenswrapper[4798]: E0203 00:16:53.908378 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:53 crc kubenswrapper[4798]: E0203 00:16:53.908603 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:53 crc kubenswrapper[4798]: I0203 00:16:53.909825 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"
Feb 03 00:16:53 crc kubenswrapper[4798]: E0203 00:16:53.910236 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047"
Feb 03 00:16:54 crc kubenswrapper[4798]: I0203 00:16:54.907914 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:54 crc kubenswrapper[4798]: E0203 00:16:54.908208 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:55 crc kubenswrapper[4798]: I0203 00:16:55.908135 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:55 crc kubenswrapper[4798]: E0203 00:16:55.908269 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:55 crc kubenswrapper[4798]: I0203 00:16:55.908322 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:55 crc kubenswrapper[4798]: I0203 00:16:55.908348 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:55 crc kubenswrapper[4798]: E0203 00:16:55.908432 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:55 crc kubenswrapper[4798]: E0203 00:16:55.908504 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:56 crc kubenswrapper[4798]: I0203 00:16:56.908194 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:56 crc kubenswrapper[4798]: E0203 00:16:56.908593 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:57 crc kubenswrapper[4798]: I0203 00:16:57.907865 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:57 crc kubenswrapper[4798]: I0203 00:16:57.907872 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:57 crc kubenswrapper[4798]: I0203 00:16:57.907866 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:57 crc kubenswrapper[4798]: E0203 00:16:57.908072 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 00:16:57 crc kubenswrapper[4798]: E0203 00:16:57.908130 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:57 crc kubenswrapper[4798]: E0203 00:16:57.908256 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:58 crc kubenswrapper[4798]: I0203 00:16:58.908141 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 00:16:58 crc kubenswrapper[4798]: E0203 00:16:58.909480 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 00:16:59 crc kubenswrapper[4798]: I0203 00:16:59.907452 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 00:16:59 crc kubenswrapper[4798]: I0203 00:16:59.907646 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:16:59 crc kubenswrapper[4798]: I0203 00:16:59.907759 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:16:59 crc kubenswrapper[4798]: E0203 00:16:59.907795 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 00:16:59 crc kubenswrapper[4798]: E0203 00:16:59.908010 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc"
Feb 03 00:16:59 crc kubenswrapper[4798]: E0203 00:16:59.908073 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:00 crc kubenswrapper[4798]: I0203 00:17:00.907793 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:00 crc kubenswrapper[4798]: E0203 00:17:00.908026 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:01 crc kubenswrapper[4798]: I0203 00:17:01.907936 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:01 crc kubenswrapper[4798]: I0203 00:17:01.907988 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:01 crc kubenswrapper[4798]: I0203 00:17:01.908109 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:01 crc kubenswrapper[4798]: E0203 00:17:01.908099 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:01 crc kubenswrapper[4798]: E0203 00:17:01.908286 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:01 crc kubenswrapper[4798]: E0203 00:17:01.908361 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:02 crc kubenswrapper[4798]: I0203 00:17:02.907590 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:02 crc kubenswrapper[4798]: E0203 00:17:02.907803 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:03 crc kubenswrapper[4798]: I0203 00:17:03.907831 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:03 crc kubenswrapper[4798]: I0203 00:17:03.907886 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:03 crc kubenswrapper[4798]: I0203 00:17:03.908148 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:03 crc kubenswrapper[4798]: E0203 00:17:03.907974 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:03 crc kubenswrapper[4798]: E0203 00:17:03.908315 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:03 crc kubenswrapper[4798]: E0203 00:17:03.908516 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.564749 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/1.log" Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.565146 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/0.log" Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.565188 4798 generic.go:334] "Generic (PLEG): container finished" podID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" containerID="f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7" exitCode=1 Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.565219 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerDied","Data":"f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7"} Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.565255 4798 scope.go:117] "RemoveContainer" containerID="8f51a4a2b3b1ac421d5524a3776981e28eea2f2ae9d1e78977c96cc2f0faa894" Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.565775 4798 scope.go:117] "RemoveContainer" containerID="f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7" Feb 03 00:17:04 crc kubenswrapper[4798]: E0203 00:17:04.566009 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ktf4c_openshift-multus(106da5aa-5f2e-4d32-b172-4844ad6de7f6)\"" pod="openshift-multus/multus-ktf4c" podUID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" Feb 03 00:17:04 crc kubenswrapper[4798]: I0203 00:17:04.908047 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:04 crc kubenswrapper[4798]: E0203 00:17:04.908470 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:05 crc kubenswrapper[4798]: I0203 00:17:05.570317 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/1.log" Feb 03 00:17:05 crc kubenswrapper[4798]: I0203 00:17:05.907774 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:05 crc kubenswrapper[4798]: I0203 00:17:05.907882 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:05 crc kubenswrapper[4798]: E0203 00:17:05.907936 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:05 crc kubenswrapper[4798]: I0203 00:17:05.907985 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:05 crc kubenswrapper[4798]: E0203 00:17:05.908056 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:05 crc kubenswrapper[4798]: E0203 00:17:05.908177 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:06 crc kubenswrapper[4798]: I0203 00:17:06.907318 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:06 crc kubenswrapper[4798]: E0203 00:17:06.907524 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:07 crc kubenswrapper[4798]: I0203 00:17:07.907779 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:07 crc kubenswrapper[4798]: I0203 00:17:07.907820 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:07 crc kubenswrapper[4798]: E0203 00:17:07.907959 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:07 crc kubenswrapper[4798]: I0203 00:17:07.908014 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:07 crc kubenswrapper[4798]: I0203 00:17:07.909064 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:17:07 crc kubenswrapper[4798]: E0203 00:17:07.909282 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gzlj4_openshift-ovn-kubernetes(b71790a2-e390-400a-a288-2a3af8467047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" Feb 03 00:17:07 crc kubenswrapper[4798]: E0203 00:17:07.909642 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:07 crc kubenswrapper[4798]: E0203 00:17:07.909683 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:08 crc kubenswrapper[4798]: E0203 00:17:08.889873 4798 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 00:17:08 crc kubenswrapper[4798]: I0203 00:17:08.907519 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:08 crc kubenswrapper[4798]: E0203 00:17:08.909257 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:09 crc kubenswrapper[4798]: E0203 00:17:09.051931 4798 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 00:17:09 crc kubenswrapper[4798]: I0203 00:17:09.907362 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:09 crc kubenswrapper[4798]: I0203 00:17:09.907363 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:09 crc kubenswrapper[4798]: I0203 00:17:09.907376 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:09 crc kubenswrapper[4798]: E0203 00:17:09.907550 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:09 crc kubenswrapper[4798]: E0203 00:17:09.907783 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:09 crc kubenswrapper[4798]: E0203 00:17:09.908025 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:10 crc kubenswrapper[4798]: I0203 00:17:10.907996 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:10 crc kubenswrapper[4798]: E0203 00:17:10.908212 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:11 crc kubenswrapper[4798]: I0203 00:17:11.907559 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:11 crc kubenswrapper[4798]: E0203 00:17:11.907770 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:11 crc kubenswrapper[4798]: I0203 00:17:11.908214 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:11 crc kubenswrapper[4798]: I0203 00:17:11.908363 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:11 crc kubenswrapper[4798]: E0203 00:17:11.909484 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:11 crc kubenswrapper[4798]: E0203 00:17:11.909675 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:12 crc kubenswrapper[4798]: I0203 00:17:12.908270 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:12 crc kubenswrapper[4798]: E0203 00:17:12.908477 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:13 crc kubenswrapper[4798]: I0203 00:17:13.907170 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:13 crc kubenswrapper[4798]: I0203 00:17:13.907213 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:13 crc kubenswrapper[4798]: I0203 00:17:13.907284 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:13 crc kubenswrapper[4798]: E0203 00:17:13.907374 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:13 crc kubenswrapper[4798]: E0203 00:17:13.907526 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:13 crc kubenswrapper[4798]: E0203 00:17:13.907751 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:14 crc kubenswrapper[4798]: E0203 00:17:14.053402 4798 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 00:17:14 crc kubenswrapper[4798]: I0203 00:17:14.907390 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:14 crc kubenswrapper[4798]: E0203 00:17:14.907610 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:15 crc kubenswrapper[4798]: I0203 00:17:15.907455 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:15 crc kubenswrapper[4798]: I0203 00:17:15.907467 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:15 crc kubenswrapper[4798]: E0203 00:17:15.907739 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:15 crc kubenswrapper[4798]: I0203 00:17:15.907790 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:15 crc kubenswrapper[4798]: E0203 00:17:15.907851 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:15 crc kubenswrapper[4798]: E0203 00:17:15.908008 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:16 crc kubenswrapper[4798]: I0203 00:17:16.907825 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:16 crc kubenswrapper[4798]: E0203 00:17:16.907997 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:17 crc kubenswrapper[4798]: I0203 00:17:17.907483 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:17 crc kubenswrapper[4798]: E0203 00:17:17.907610 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:17 crc kubenswrapper[4798]: I0203 00:17:17.907824 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:17 crc kubenswrapper[4798]: E0203 00:17:17.907885 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:17 crc kubenswrapper[4798]: I0203 00:17:17.907998 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:17 crc kubenswrapper[4798]: E0203 00:17:17.908053 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:17 crc kubenswrapper[4798]: I0203 00:17:17.908449 4798 scope.go:117] "RemoveContainer" containerID="f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7" Feb 03 00:17:18 crc kubenswrapper[4798]: I0203 00:17:18.623107 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/1.log" Feb 03 00:17:18 crc kubenswrapper[4798]: I0203 00:17:18.623524 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerStarted","Data":"3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415"} Feb 03 00:17:18 crc kubenswrapper[4798]: I0203 00:17:18.908037 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:18 crc kubenswrapper[4798]: E0203 00:17:18.910114 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:19 crc kubenswrapper[4798]: E0203 00:17:19.054618 4798 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 00:17:19 crc kubenswrapper[4798]: I0203 00:17:19.908246 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:19 crc kubenswrapper[4798]: I0203 00:17:19.908379 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:19 crc kubenswrapper[4798]: I0203 00:17:19.908379 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:19 crc kubenswrapper[4798]: E0203 00:17:19.908593 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:19 crc kubenswrapper[4798]: E0203 00:17:19.908777 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:19 crc kubenswrapper[4798]: E0203 00:17:19.909291 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:19 crc kubenswrapper[4798]: I0203 00:17:19.909838 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.633070 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.636298 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerStarted","Data":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.636961 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.667013 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podStartSLOduration=111.66700017 podStartE2EDuration="1m51.66700017s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:20.666538238 +0000 UTC m=+132.432528249" watchObservedRunningTime="2026-02-03 00:17:20.66700017 +0000 UTC m=+132.432990171" Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.877844 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hzk9m"] Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.877952 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:20 crc kubenswrapper[4798]: E0203 00:17:20.878051 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:20 crc kubenswrapper[4798]: I0203 00:17:20.908350 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:20 crc kubenswrapper[4798]: E0203 00:17:20.908555 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:21 crc kubenswrapper[4798]: I0203 00:17:21.907586 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:21 crc kubenswrapper[4798]: I0203 00:17:21.907639 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:21 crc kubenswrapper[4798]: E0203 00:17:21.907812 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:21 crc kubenswrapper[4798]: E0203 00:17:21.908226 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:22 crc kubenswrapper[4798]: I0203 00:17:22.907458 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:22 crc kubenswrapper[4798]: I0203 00:17:22.907855 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:22 crc kubenswrapper[4798]: E0203 00:17:22.908133 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hzk9m" podUID="039e204d-4d36-471e-990f-4eb5b4a193fc" Feb 03 00:17:22 crc kubenswrapper[4798]: E0203 00:17:22.908297 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 00:17:23 crc kubenswrapper[4798]: I0203 00:17:23.907732 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:23 crc kubenswrapper[4798]: E0203 00:17:23.908305 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 00:17:23 crc kubenswrapper[4798]: I0203 00:17:23.907759 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:23 crc kubenswrapper[4798]: E0203 00:17:23.908739 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.907399 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.907538 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.910406 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.911236 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.911486 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 00:17:24 crc kubenswrapper[4798]: I0203 00:17:24.917459 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 00:17:25 crc kubenswrapper[4798]: I0203 00:17:25.907412 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:25 crc kubenswrapper[4798]: I0203 00:17:25.907432 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:25 crc kubenswrapper[4798]: I0203 00:17:25.909837 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 00:17:25 crc kubenswrapper[4798]: I0203 00:17:25.910294 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.144034 4798 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.204980 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-28wct"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.206465 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.208852 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.209699 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.213812 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8tbx2"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.214761 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.221581 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.225529 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.227843 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.228366 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.229172 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.230930 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.242011 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.257645 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.260199 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.260312 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.260874 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.261450 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.261883 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.262333 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.262799 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.263058 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.263079 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.264349 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k8pqq"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.265639 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sw2b4"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.266942 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-config\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267090 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk64\" (UniqueName: \"kubernetes.io/projected/ff545dc5-468e-410c-aacb-2c26ef11274e-kube-api-access-2rk64\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267251 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-auth-proxy-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267280 4798 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-serving-cert\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267327 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c3fbdb3-7b79-4103-83d4-f2d051890fec-machine-approver-tls\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267356 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-image-import-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267379 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267398 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-audit-dir\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267419 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-client\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267447 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267484 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-serving-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267516 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-node-pullsecrets\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267543 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-encryption-config\") 
pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267568 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-audit\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.267600 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsgw\" (UniqueName: \"kubernetes.io/projected/5c3fbdb3-7b79-4103-83d4-f2d051890fec-kube-api-access-8wsgw\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.268546 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.268792 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.268910 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.268794 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269114 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269231 4798 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269351 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269020 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269467 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.269062 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.270571 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.270774 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.270834 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271055 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271301 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271556 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271687 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271780 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.271960 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29501280-nhgw5"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.272279 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.272560 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8pqq" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.272945 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.274375 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r8z8z"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.274988 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.273093 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.275531 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.273317 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.273556 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.277909 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.279225 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.285924 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.286241 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.286405 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.286999 4798 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-7m7t5"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.287545 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jth29"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.288190 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mqqj9"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.288591 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289040 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289539 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289691 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289773 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289865 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289894 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.289955 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290022 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290039 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290209 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290402 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.311446 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.311709 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.312028 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.312181 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.312233 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.312377 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: 
I0203 00:17:27.312030 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290610 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.311342 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.290574 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.313044 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.313556 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.313612 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.313883 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.314623 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.315173 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.315325 4798 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.315670 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.315873 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.318643 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.319093 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.321331 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.321613 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.321639 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.322110 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.322455 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323090 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323249 4798 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323338 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323545 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323766 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323836 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323980 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.323987 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.324122 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.324309 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.324539 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.325233 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 
00:17:27.325937 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.326069 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.326193 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.326316 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.326197 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.326266 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.335629 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.337407 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.338875 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.339485 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.339962 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.339971 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.343063 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.343788 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.346338 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-th92f"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.346825 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.347214 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.347787 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.348119 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.349514 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.350192 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.350567 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.351077 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.355986 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4c8fd"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.357263 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nwr6q"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.357492 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.358147 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.359316 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.360039 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.361519 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.364743 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.366466 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.367677 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.368016 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.368147 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.368695 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 00:17:27 crc 
kubenswrapper[4798]: I0203 00:17:27.369620 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqj5\" (UniqueName: \"kubernetes.io/projected/b86e799d-e230-4159-9a60-a92b5caee0fa-kube-api-access-fpqj5\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369702 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369891 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-dir\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369930 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c3fbdb3-7b79-4103-83d4-f2d051890fec-machine-approver-tls\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369952 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369971 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-image-import-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.369990 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370007 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-audit-dir\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370025 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-client\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370043 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-config\") pod 
\"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370065 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370086 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-serving-cert\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370113 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-serving-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370134 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-encryption-config\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370154 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-node-pullsecrets\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370183 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q9c\" (UniqueName: \"kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370205 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-policies\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370224 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370242 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-client\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370259 
4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-encryption-config\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370281 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-audit\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370301 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-images\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370331 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsgw\" (UniqueName: \"kubernetes.io/projected/5c3fbdb3-7b79-4103-83d4-f2d051890fec-kube-api-access-8wsgw\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370353 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" 
Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370373 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4446r\" (UniqueName: \"kubernetes.io/projected/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-kube-api-access-4446r\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370398 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370416 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370438 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-config\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370455 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rk64\" (UniqueName: \"kubernetes.io/projected/ff545dc5-468e-410c-aacb-2c26ef11274e-kube-api-access-2rk64\") pod \"apiserver-76f77b778f-28wct\" (UID: 
\"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370472 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370491 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-auth-proxy-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.370514 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-serving-cert\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.377466 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-client\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.378514 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-audit\") pod 
\"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.401457 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.403087 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.403611 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-config\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.403805 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-audit-dir\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.404049 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.404194 4798 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.404310 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff545dc5-468e-410c-aacb-2c26ef11274e-node-pullsecrets\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.404721 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c3fbdb3-7b79-4103-83d4-f2d051890fec-auth-proxy-config\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.405107 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-serving-cert\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.406463 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.406718 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ff545dc5-468e-410c-aacb-2c26ef11274e-encryption-config\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.406950 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.407308 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-image-import-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.411734 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ff545dc5-468e-410c-aacb-2c26ef11274e-etcd-serving-ca\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.412313 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c3fbdb3-7b79-4103-83d4-f2d051890fec-machine-approver-tls\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.413362 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.413640 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.414348 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.424020 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.425260 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.427495 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.430255 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.431209 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.431538 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.432730 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.436697 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.437254 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.440288 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.443378 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.444842 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.446328 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.448993 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.450220 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.455165 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.456994 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.460016 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.460466 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.460883 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.462363 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lmn6t"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.463141 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8tbx2"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.463269 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.464963 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.465490 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.466960 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.467752 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.467915 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.468221 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.469047 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vrvn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.469996 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.470407 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.471215 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.471632 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.471691 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.472891 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.472758 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-28wct"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.473671 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ckdgn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.474396 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.474684 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.475972 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29501280-nhgw5"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.477499 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.479694 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.480777 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.481353 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed376d37-629a-48fd-81f9-218864e9b711-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.481408 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-oauth-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.481446 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqj5\" (UniqueName: \"kubernetes.io/projected/b86e799d-e230-4159-9a60-a92b5caee0fa-kube-api-access-fpqj5\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.481963 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.482057 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.482089 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc65\" (UniqueName: \"kubernetes.io/projected/ed376d37-629a-48fd-81f9-218864e9b711-kube-api-access-qrc65\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" 
Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.482393 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-dir\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.484537 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sw2b4"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.484628 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7m7t5"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485333 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485554 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-dir\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485615 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 
00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485645 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-trusted-ca-bundle\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485689 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-config\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485714 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-serving-cert\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485744 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485769 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " 
pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485786 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zd2v\" (UniqueName: \"kubernetes.io/projected/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-kube-api-access-5zd2v\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485809 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-encryption-config\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485842 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-policies\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485861 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485879 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q9c\" (UniqueName: \"kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c\") pod \"controller-manager-879f6c89f-tpkwn\" 
(UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485902 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-client\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485931 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-images\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485961 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-service-ca\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.485995 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486025 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4446r\" (UniqueName: \"kubernetes.io/projected/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-kube-api-access-4446r\") pod 
\"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486047 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486064 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486096 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed376d37-629a-48fd-81f9-218864e9b711-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486116 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-oauth-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.486148 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.487933 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.489321 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.490128 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-config\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.491317 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-client\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.492113 4798 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.492336 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.492503 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-images\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.492561 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.492590 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.493242 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.493476 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.493793 4798 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-serving-cert\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.493981 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-audit-policies\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.494002 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.494584 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b86e799d-e230-4159-9a60-a92b5caee0fa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.496483 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8pqq"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.496681 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b86e799d-e230-4159-9a60-a92b5caee0fa-encryption-config\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.499826 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.503968 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsdzb"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.505387 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.508223 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jxs7f"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.508974 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.510195 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.512165 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-th92f"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.514184 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.515409 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ckdgn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.517147 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.519733 4798 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.520763 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r8z8z"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.522091 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.523563 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.525403 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jth29"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.527287 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mqqj9"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.528977 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.538810 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.540483 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.541748 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nwr6q"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.542870 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.543990 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vrvn"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.545159 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.546374 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.547747 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.549491 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.549534 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.550739 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lmn6t"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.551831 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.553073 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsdzb"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.554580 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.556067 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5wrch"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.556950 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.557582 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5wrch"] Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.568977 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.586891 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-trusted-ca-bundle\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587035 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587126 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zd2v\" (UniqueName: \"kubernetes.io/projected/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-kube-api-access-5zd2v\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 
00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587217 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587329 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-service-ca\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587446 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-oauth-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587523 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed376d37-629a-48fd-81f9-218864e9b711-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587622 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed376d37-629a-48fd-81f9-218864e9b711-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587762 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-oauth-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.587880 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrc65\" (UniqueName: \"kubernetes.io/projected/ed376d37-629a-48fd-81f9-218864e9b711-kube-api-access-qrc65\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.588766 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed376d37-629a-48fd-81f9-218864e9b711-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.588777 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-service-ca\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.589006 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.589059 4798 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-oauth-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.589170 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.589585 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-trusted-ca-bundle\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.591715 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-serving-cert\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.592995 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-console-oauth-config\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.593853 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed376d37-629a-48fd-81f9-218864e9b711-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.609619 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.629358 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.649365 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.669771 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.690001 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.709603 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.729391 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.749232 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.769091 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.789400 4798 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.809731 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.830186 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.850252 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.868557 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.890837 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.957609 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rk64\" (UniqueName: \"kubernetes.io/projected/ff545dc5-468e-410c-aacb-2c26ef11274e-kube-api-access-2rk64\") pod \"apiserver-76f77b778f-28wct\" (UID: \"ff545dc5-468e-410c-aacb-2c26ef11274e\") " pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.978721 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsgw\" (UniqueName: \"kubernetes.io/projected/5c3fbdb3-7b79-4103-83d4-f2d051890fec-kube-api-access-8wsgw\") pod \"machine-approver-56656f9798-ghhvz\" (UID: \"5c3fbdb3-7b79-4103-83d4-f2d051890fec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:27 crc kubenswrapper[4798]: I0203 00:17:27.993294 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 
00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.009912 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.030084 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.050608 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.071366 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.089844 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.110202 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.130432 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.131554 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.152868 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.162839 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.169463 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.190477 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.209727 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.230182 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.250196 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.258010 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.271322 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.304859 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.311024 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.330492 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.350693 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.370231 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.389761 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.409550 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.431322 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: 
I0203 00:17:28.433872 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-28wct"] Feb 03 00:17:28 crc kubenswrapper[4798]: W0203 00:17:28.444911 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff545dc5_468e_410c_aacb_2c26ef11274e.slice/crio-7490a24e242a696b225d995aed40d134ac15cc483c47c77fb9e8dadfc8533c82 WatchSource:0}: Error finding container 7490a24e242a696b225d995aed40d134ac15cc483c47c77fb9e8dadfc8533c82: Status 404 returned error can't find the container with id 7490a24e242a696b225d995aed40d134ac15cc483c47c77fb9e8dadfc8533c82 Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.449992 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.468264 4798 request.go:700] Waited for 1.019268004s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.469629 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.490072 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.509256 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.530094 4798 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.549501 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.569710 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.588925 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.610185 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.630557 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.650923 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.669046 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.679102 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-28wct" event={"ID":"ff545dc5-468e-410c-aacb-2c26ef11274e","Type":"ContainerStarted","Data":"7490a24e242a696b225d995aed40d134ac15cc483c47c77fb9e8dadfc8533c82"} Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.680781 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" event={"ID":"5c3fbdb3-7b79-4103-83d4-f2d051890fec","Type":"ContainerStarted","Data":"ec758a209db5291bee06aafa83c7ae5e896df8c7fe00b22476184e24de5e6242"} Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.680820 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" event={"ID":"5c3fbdb3-7b79-4103-83d4-f2d051890fec","Type":"ContainerStarted","Data":"f093d7569e96776e14c985d1939576a582e69c53e228c2644db64f2829aecc6b"} Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.689509 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.717069 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.729112 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.750111 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.769316 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.789824 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.810009 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.830105 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" 
Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.850426 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.869323 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.890017 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.909626 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.935559 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.951254 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.970239 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 00:17:28 crc kubenswrapper[4798]: I0203 00:17:28.990110 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.010444 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.030762 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.049347 4798 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.074179 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.089958 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.109970 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.130486 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.150409 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.200611 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqj5\" (UniqueName: \"kubernetes.io/projected/b86e799d-e230-4159-9a60-a92b5caee0fa-kube-api-access-fpqj5\") pod \"apiserver-7bbb656c7d-dz2lb\" (UID: \"b86e799d-e230-4159-9a60-a92b5caee0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.216814 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q9c\" (UniqueName: \"kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c\") pod \"controller-manager-879f6c89f-tpkwn\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.230213 4798 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.234467 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4446r\" (UniqueName: \"kubernetes.io/projected/ed0d46a1-22a9-46aa-b0d8-c65624861d9a-kube-api-access-4446r\") pod \"machine-api-operator-5694c8668f-8tbx2\" (UID: \"ed0d46a1-22a9-46aa-b0d8-c65624861d9a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.250769 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.270007 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.290141 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.310453 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.330965 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.350384 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.365118 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.369356 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.390111 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.390940 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.410528 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.449803 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.451201 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zd2v\" (UniqueName: \"kubernetes.io/projected/dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f-kube-api-access-5zd2v\") pod \"console-f9d7485db-mqqj9\" (UID: \"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f\") " pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.468283 4798 request.go:700] Waited for 1.557189361s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/persistentvolumes/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.476717 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrc65\" (UniqueName: \"kubernetes.io/projected/ed376d37-629a-48fd-81f9-218864e9b711-kube-api-access-qrc65\") pod 
\"openshift-apiserver-operator-796bbdcf4f-qsvlx\" (UID: \"ed376d37-629a-48fd-81f9-218864e9b711\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562104 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562435 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nndhl\" (UniqueName: \"kubernetes.io/projected/3fa10d48-6402-4828-a366-1a9ea825a444-kube-api-access-nndhl\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562480 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562543 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-default-certificate\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" 
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562589 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562611 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562740 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfa4269-a259-466c-b0fd-be3ae32849ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562767 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11dc6c8b-70db-4d88-8792-a4a100330c8a-serving-cert\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562789 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2cxg8\" (UniqueName: \"kubernetes.io/projected/11dc6c8b-70db-4d88-8792-a4a100330c8a-kube-api-access-2cxg8\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562831 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbbx\" (UniqueName: \"kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562857 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562880 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfa4269-a259-466c-b0fd-be3ae32849ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562900 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bfe431-00e7-4ce9-a648-6e5634edfd3f-serving-cert\") pod \"console-operator-58897d9998-r8z8z\" 
(UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562923 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562944 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562964 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.562984 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0dc408c-7d0c-4c18-a500-bcffb273b616-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563005 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-config\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563025 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563062 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-serving-cert\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563084 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ad9a195-3fae-4ba2-a2c6-577888216124-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563107 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config\") pod 
\"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563130 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563166 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0dc408c-7d0c-4c18-a500-bcffb273b616-config\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563187 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-config\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563218 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563239 4798 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563260 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r96dl\" (UniqueName: \"kubernetes.io/projected/e80fb785-eead-4ebd-9e0c-0f5c548c257c-kube-api-access-r96dl\") pod \"downloads-7954f5f757-k8pqq\" (UID: \"e80fb785-eead-4ebd-9e0c-0f5c548c257c\") " pod="openshift-console/downloads-7954f5f757-k8pqq" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563282 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddfa4269-a259-466c-b0fd-be3ae32849ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563304 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563325 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563358 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-service-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563404 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpdl\" (UniqueName: \"kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563442 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563465 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc 
kubenswrapper[4798]: I0203 00:17:29.563501 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563521 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563546 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563567 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa10d48-6402-4828-a366-1a9ea825a444-serving-cert\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563591 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563613 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563637 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563694 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563737 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563762 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdk6\" (UniqueName: \"kubernetes.io/projected/47d61fbc-8622-45c3-a0fd-b52320050c34-kube-api-access-9xdk6\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563784 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m69v\" (UniqueName: \"kubernetes.io/projected/b35482f6-84f0-4bbc-8bd4-2787b5912589-kube-api-access-5m69v\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563832 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0dc408c-7d0c-4c18-a500-bcffb273b616-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563853 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8vc\" (UniqueName: \"kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 
00:17:29.563876 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-stats-auth\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563910 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9v9\" (UniqueName: \"kubernetes.io/projected/a963b867-b963-4b9e-abe9-47088ff98bea-kube-api-access-gx9v9\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563931 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-config\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563953 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk7w\" (UniqueName: \"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-kube-api-access-wvk7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.563985 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fa10d48-6402-4828-a366-1a9ea825a444-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564019 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564041 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-metrics-certs\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564073 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-client\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564093 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad9a195-3fae-4ba2-a2c6-577888216124-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564114 4798 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564144 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35482f6-84f0-4bbc-8bd4-2787b5912589-metrics-tls\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564165 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a963b867-b963-4b9e-abe9-47088ff98bea-service-ca-bundle\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564185 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6vc\" (UniqueName: \"kubernetes.io/projected/83bfe431-00e7-4ce9-a648-6e5634edfd3f-kube-api-access-hj6vc\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564232 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca\") pod \"image-registry-697d97f7c8-wxw86\" (UID: 
\"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564252 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-trusted-ca\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564275 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/47d61fbc-8622-45c3-a0fd-b52320050c34-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564296 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vx4h\" (UniqueName: \"kubernetes.io/projected/841ea4a0-8ab8-4a72-96a8-40578497e9c4-kube-api-access-6vx4h\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.564319 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: E0203 00:17:29.567809 4798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.067786747 +0000 UTC m=+141.833776988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.622092 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.664994 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665073 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6vc\" (UniqueName: \"kubernetes.io/projected/83bfe431-00e7-4ce9-a648-6e5634edfd3f-kube-api-access-hj6vc\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665097 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a963b867-b963-4b9e-abe9-47088ff98bea-service-ca-bundle\") pod 
\"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665114 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665129 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/47d61fbc-8622-45c3-a0fd-b52320050c34-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665147 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vx4h\" (UniqueName: \"kubernetes.io/projected/841ea4a0-8ab8-4a72-96a8-40578497e9c4-kube-api-access-6vx4h\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665167 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665183 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nndhl\" (UniqueName: \"kubernetes.io/projected/3fa10d48-6402-4828-a366-1a9ea825a444-kube-api-access-nndhl\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665201 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-csi-data-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665215 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfea4e8f-4296-4aad-97b3-0d571e9c527c-proxy-tls\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665230 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7rw\" (UniqueName: \"kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665247 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfa4269-a259-466c-b0fd-be3ae32849ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: 
\"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665261 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-config\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665286 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665311 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665325 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bfe431-00e7-4ce9-a648-6e5634edfd3f-serving-cert\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665342 4798 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0dc408c-7d0c-4c18-a500-bcffb273b616-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665361 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-config\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665376 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665395 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5b455ef-fa69-453a-9c72-6e952faae9db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665411 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-srv-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665424 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/47fb3222-46ee-4e02-9a2d-ecc944492a83-tmpfs\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665440 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665455 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0dc408c-7d0c-4c18-a500-bcffb273b616-config\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665470 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-mountpoint-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665486 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r96dl\" (UniqueName: 
\"kubernetes.io/projected/e80fb785-eead-4ebd-9e0c-0f5c548c257c-kube-api-access-r96dl\") pod \"downloads-7954f5f757-k8pqq\" (UID: \"e80fb785-eead-4ebd-9e0c-0f5c548c257c\") " pod="openshift-console/downloads-7954f5f757-k8pqq" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665501 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnjq\" (UniqueName: \"kubernetes.io/projected/2953b651-522d-4749-8416-b2c1922c26b3-kube-api-access-9gnjq\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665516 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665537 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665554 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b814f1-a5f8-4349-8695-deb0ef824ff8-serving-cert\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:29 crc 
kubenswrapper[4798]: I0203 00:17:29.665580 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flz4q\" (UniqueName: \"kubernetes.io/projected/f9310c18-e995-4f6b-b49a-c0cd2f574506-kube-api-access-flz4q\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665596 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665614 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gt9\" (UniqueName: \"kubernetes.io/projected/66b52bf0-b7fc-4518-a958-64995f5c00b0-kube-api-access-89gt9\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665634 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665666 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbvl\" (UniqueName: 
\"kubernetes.io/projected/eb78bcb9-c424-4592-99c4-e1a4d711d81b-kube-api-access-8pbvl\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665693 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzlp\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-kube-api-access-ktzlp\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665710 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa10d48-6402-4828-a366-1a9ea825a444-serving-cert\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665727 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665744 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv94c\" (UniqueName: \"kubernetes.io/projected/47fb3222-46ee-4e02-9a2d-ecc944492a83-kube-api-access-gv94c\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665763 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665779 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665798 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.665921 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666012 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9xdk6\" (UniqueName: \"kubernetes.io/projected/47d61fbc-8622-45c3-a0fd-b52320050c34-kube-api-access-9xdk6\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666063 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snh8\" (UniqueName: \"kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666105 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wc9\" (UniqueName: \"kubernetes.io/projected/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-kube-api-access-48wc9\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666139 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0dc408c-7d0c-4c18-a500-bcffb273b616-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666168 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2953b651-522d-4749-8416-b2c1922c26b3-signing-cabundle\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: 
\"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.666540 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.667472 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.667821 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-config\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.668812 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a963b867-b963-4b9e-abe9-47088ff98bea-service-ca-bundle\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.669883 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"] Feb 03 00:17:29 crc 
kubenswrapper[4798]: I0203 00:17:29.670400 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0dc408c-7d0c-4c18-a500-bcffb273b616-config\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.670853 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672258 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672320 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2953b651-522d-4749-8416-b2c1922c26b3-signing-key\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672361 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: 
\"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672422 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-client\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672449 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35482f6-84f0-4bbc-8bd4-2787b5912589-metrics-tls\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672475 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-certs\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672504 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-trusted-ca\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.672531 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.673885 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.674772 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa10d48-6402-4828-a366-1a9ea825a444-serving-cert\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.675017 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.676102 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: 
I0203 00:17:29.676732 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.676853 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.676932 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.676976 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.677039 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c9c9866e-cbf6-47d0-91f1-25412166eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.677070 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0fa30af-feba-4653-8b34-bfc34eed90da-proxy-tls\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.677099 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ea79d8a-686d-411b-8996-82a2e2a669fe-trusted-ca\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: E0203 00:17:29.678132 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.178107718 +0000 UTC m=+141.944097879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678221 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-trusted-ca\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678335 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678372 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678431 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-default-certificate\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678463 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-webhook-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678540 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bfe431-00e7-4ce9-a648-6e5634edfd3f-serving-cert\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678571 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678723 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678779 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11dc6c8b-70db-4d88-8792-a4a100330c8a-serving-cert\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678809 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-node-bootstrap-token\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678862 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmbbx\" (UniqueName: \"kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678871 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678884 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cxg8\" (UniqueName: \"kubernetes.io/projected/11dc6c8b-70db-4d88-8792-a4a100330c8a-kube-api-access-2cxg8\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: 
\"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678939 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.678973 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.679006 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.679084 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfa4269-a259-466c-b0fd-be3ae32849ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.679961 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.680020 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfa4269-a259-466c-b0fd-be3ae32849ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.680997 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.681811 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-client\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682255 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682331 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682393 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00003ad4-3d39-4600-ac7e-11b35b3d67c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682427 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d4x\" (UniqueName: \"kubernetes.io/projected/00003ad4-3d39-4600-ac7e-11b35b3d67c1-kube-api-access-q4d4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682456 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682507 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-serving-cert\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682539 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ad9a195-3fae-4ba2-a2c6-577888216124-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682575 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpxb\" (UniqueName: \"kubernetes.io/projected/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-kube-api-access-7hpxb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682608 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682633 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8wl\" (UniqueName: \"kubernetes.io/projected/dfea4e8f-4296-4aad-97b3-0d571e9c527c-kube-api-access-tg8wl\") pod 
\"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682566 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfa4269-a259-466c-b0fd-be3ae32849ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682739 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682776 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682809 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b52bf0-b7fc-4518-a958-64995f5c00b0-config-volume\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682837 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682862 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hth4h\" (UniqueName: \"kubernetes.io/projected/a8b814f1-a5f8-4349-8695-deb0ef824ff8-kube-api-access-hth4h\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.682898 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-config\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683286 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-default-certificate\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683704 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11dc6c8b-70db-4d88-8792-a4a100330c8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc 
kubenswrapper[4798]: I0203 00:17:29.683763 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683818 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683851 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddfa4269-a259-466c-b0fd-be3ae32849ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683941 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-service-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.683971 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: 
\"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684496 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bfe431-00e7-4ce9-a648-6e5634edfd3f-config\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684787 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-service-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684830 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11dc6c8b-70db-4d88-8792-a4a100330c8a-serving-cert\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684837 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpdl\" (UniqueName: \"kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684894 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a8b814f1-a5f8-4349-8695-deb0ef824ff8-config\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684934 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.684962 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685065 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zjp\" (UniqueName: \"kubernetes.io/projected/c9c9866e-cbf6-47d0-91f1-25412166eb4f-kube-api-access-f6zjp\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685094 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66b52bf0-b7fc-4518-a958-64995f5c00b0-metrics-tls\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685129 
4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685154 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685182 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685254 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00003ad4-3d39-4600-ac7e-11b35b3d67c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685315 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685344 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb78bcb9-c424-4592-99c4-e1a4d711d81b-cert\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685416 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ea79d8a-686d-411b-8996-82a2e2a669fe-metrics-tls\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685453 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-socket-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685485 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z88z\" (UniqueName: \"kubernetes.io/projected/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-kube-api-access-8z88z\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 
00:17:29.685519 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m69v\" (UniqueName: \"kubernetes.io/projected/b35482f6-84f0-4bbc-8bd4-2787b5912589-kube-api-access-5m69v\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685550 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-images\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685578 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ll8\" (UniqueName: \"kubernetes.io/projected/a5b455ef-fa69-453a-9c72-6e952faae9db-kube-api-access-f5ll8\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685610 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0fa30af-feba-4653-8b34-bfc34eed90da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685636 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-stats-auth\") pod 
\"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685709 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzczm\" (UniqueName: \"kubernetes.io/projected/cc175a81-0556-44fd-95b9-1ec1e9af0f69-kube-api-access-qzczm\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.685737 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdj4v\" (UniqueName: \"kubernetes.io/projected/a2481f1c-dd12-4275-be76-110b4ad35541-kube-api-access-vdj4v\") pod \"migrator-59844c95c7-qdl4h\" (UID: \"a2481f1c-dd12-4275-be76-110b4ad35541\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.686849 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.688220 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-etcd-ca\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.689266 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.690237 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/47d61fbc-8622-45c3-a0fd-b52320050c34-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.691124 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0dc408c-7d0c-4c18-a500-bcffb273b616-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.691280 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.696905 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc 
kubenswrapper[4798]: E0203 00:17:29.688456 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.188437894 +0000 UTC m=+141.954428075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.698792 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8vc\" (UniqueName: \"kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.698851 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9v9\" (UniqueName: \"kubernetes.io/projected/a963b867-b963-4b9e-abe9-47088ff98bea-kube-api-access-gx9v9\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.698897 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8t2w\" (UniqueName: \"kubernetes.io/projected/f0fa30af-feba-4653-8b34-bfc34eed90da-kube-api-access-d8t2w\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: 
\"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.698929 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-registration-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699019 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fa10d48-6402-4828-a366-1a9ea825a444-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699056 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-config\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699089 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvk7w\" (UniqueName: \"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-kube-api-access-wvk7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699151 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-metrics-certs\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699179 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-apiservice-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699184 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/841ea4a0-8ab8-4a72-96a8-40578497e9c4-serving-cert\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699211 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-srv-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699244 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-plugins-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699278 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699299 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699310 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk6js\" (UniqueName: \"kubernetes.io/projected/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-kube-api-access-qk6js\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699402 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad9a195-3fae-4ba2-a2c6-577888216124-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.699484 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: 
\"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.700366 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fa10d48-6402-4828-a366-1a9ea825a444-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.700983 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841ea4a0-8ab8-4a72-96a8-40578497e9c4-config\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.701617 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.702218 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.705401 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ad9a195-3fae-4ba2-a2c6-577888216124-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.707621 4798 generic.go:334] "Generic (PLEG): container finished" podID="ff545dc5-468e-410c-aacb-2c26ef11274e" containerID="f029dcd11c1501b5df3cabcec2cc3ccb76e320c7e11054fa63070fc46a07e0c8" exitCode=0 Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.707744 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-28wct" event={"ID":"ff545dc5-468e-410c-aacb-2c26ef11274e","Type":"ContainerDied","Data":"f029dcd11c1501b5df3cabcec2cc3ccb76e320c7e11054fa63070fc46a07e0c8"} Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.708751 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.714138 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-stats-auth\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.716419 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" event={"ID":"5c3fbdb3-7b79-4103-83d4-f2d051890fec","Type":"ContainerStarted","Data":"1bfea8114809037aedfaed53115e80fe37ef6594a484728f2430bc0a84087af1"} Feb 03 
00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.716731 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb"] Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.717200 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a963b867-b963-4b9e-abe9-47088ff98bea-metrics-certs\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.723946 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35482f6-84f0-4bbc-8bd4-2787b5912589-metrics-tls\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.732808 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6vc\" (UniqueName: \"kubernetes.io/projected/83bfe431-00e7-4ce9-a648-6e5634edfd3f-kube-api-access-hj6vc\") pod \"console-operator-58897d9998-r8z8z\" (UID: \"83bfe431-00e7-4ce9-a648-6e5634edfd3f\") " pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.735246 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.735519 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.736388 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ad9a195-3fae-4ba2-a2c6-577888216124-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.751232 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdk6\" (UniqueName: \"kubernetes.io/projected/47d61fbc-8622-45c3-a0fd-b52320050c34-kube-api-access-9xdk6\") pod \"cluster-samples-operator-665b6dd947-zgqjc\" (UID: \"47d61fbc-8622-45c3-a0fd-b52320050c34\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.762883 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8tbx2"] Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.767301 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vx4h\" (UniqueName: \"kubernetes.io/projected/841ea4a0-8ab8-4a72-96a8-40578497e9c4-kube-api-access-6vx4h\") pod \"etcd-operator-b45778765-th92f\" (UID: \"841ea4a0-8ab8-4a72-96a8-40578497e9c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.769299 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.783339 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nndhl\" (UniqueName: \"kubernetes.io/projected/3fa10d48-6402-4828-a366-1a9ea825a444-kube-api-access-nndhl\") pod \"openshift-config-operator-7777fb866f-jth29\" (UID: \"3fa10d48-6402-4828-a366-1a9ea825a444\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800008 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800202 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800255 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b814f1-a5f8-4349-8695-deb0ef824ff8-config\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800290 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zjp\" 
(UniqueName: \"kubernetes.io/projected/c9c9866e-cbf6-47d0-91f1-25412166eb4f-kube-api-access-f6zjp\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800308 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800325 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66b52bf0-b7fc-4518-a958-64995f5c00b0-metrics-tls\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800342 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00003ad4-3d39-4600-ac7e-11b35b3d67c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800359 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb78bcb9-c424-4592-99c4-e1a4d711d81b-cert\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800376 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ea79d8a-686d-411b-8996-82a2e2a669fe-metrics-tls\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800396 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-socket-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800416 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z88z\" (UniqueName: \"kubernetes.io/projected/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-kube-api-access-8z88z\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800442 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-images\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800458 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ll8\" (UniqueName: \"kubernetes.io/projected/a5b455ef-fa69-453a-9c72-6e952faae9db-kube-api-access-f5ll8\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800474 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0fa30af-feba-4653-8b34-bfc34eed90da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800504 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzczm\" (UniqueName: \"kubernetes.io/projected/cc175a81-0556-44fd-95b9-1ec1e9af0f69-kube-api-access-qzczm\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800521 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdj4v\" (UniqueName: \"kubernetes.io/projected/a2481f1c-dd12-4275-be76-110b4ad35541-kube-api-access-vdj4v\") pod \"migrator-59844c95c7-qdl4h\" (UID: \"a2481f1c-dd12-4275-be76-110b4ad35541\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800542 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8t2w\" (UniqueName: \"kubernetes.io/projected/f0fa30af-feba-4653-8b34-bfc34eed90da-kube-api-access-d8t2w\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800569 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-registration-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800593 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-plugins-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800611 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-apiservice-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800628 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-srv-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800724 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk6js\" (UniqueName: \"kubernetes.io/projected/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-kube-api-access-qk6js\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800764 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-csi-data-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800842 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfea4e8f-4296-4aad-97b3-0d571e9c527c-proxy-tls\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800862 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd7rw\" (UniqueName: \"kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800890 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-config\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800928 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5b455ef-fa69-453a-9c72-6e952faae9db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800948 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-srv-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800965 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/47fb3222-46ee-4e02-9a2d-ecc944492a83-tmpfs\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800963 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-socket-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.800984 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-mountpoint-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801052 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-mountpoint-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801080 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnjq\" (UniqueName: \"kubernetes.io/projected/2953b651-522d-4749-8416-b2c1922c26b3-kube-api-access-9gnjq\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801106 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b814f1-a5f8-4349-8695-deb0ef824ff8-serving-cert\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801114 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-registration-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801130 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flz4q\" (UniqueName: \"kubernetes.io/projected/f9310c18-e995-4f6b-b49a-c0cd2f574506-kube-api-access-flz4q\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801153 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: 
\"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801179 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gt9\" (UniqueName: \"kubernetes.io/projected/66b52bf0-b7fc-4518-a958-64995f5c00b0-kube-api-access-89gt9\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801203 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbvl\" (UniqueName: \"kubernetes.io/projected/eb78bcb9-c424-4592-99c4-e1a4d711d81b-kube-api-access-8pbvl\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801221 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktzlp\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-kube-api-access-ktzlp\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801242 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801265 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv94c\" 
(UniqueName: \"kubernetes.io/projected/47fb3222-46ee-4e02-9a2d-ecc944492a83-kube-api-access-gv94c\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801332 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8snh8\" (UniqueName: \"kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801351 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wc9\" (UniqueName: \"kubernetes.io/projected/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-kube-api-access-48wc9\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801372 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2953b651-522d-4749-8416-b2c1922c26b3-signing-cabundle\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801394 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2953b651-522d-4749-8416-b2c1922c26b3-signing-key\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801425 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-certs\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801456 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801481 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801507 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c9866e-cbf6-47d0-91f1-25412166eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801526 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0fa30af-feba-4653-8b34-bfc34eed90da-proxy-tls\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801546 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801572 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801589 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ea79d8a-686d-411b-8996-82a2e2a669fe-trusted-ca\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801612 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-webhook-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801633 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-node-bootstrap-token\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801760 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801830 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00003ad4-3d39-4600-ac7e-11b35b3d67c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801850 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4d4x\" (UniqueName: \"kubernetes.io/projected/00003ad4-3d39-4600-ac7e-11b35b3d67c1-kube-api-access-q4d4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801868 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801903 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpxb\" (UniqueName: \"kubernetes.io/projected/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-kube-api-access-7hpxb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801940 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b52bf0-b7fc-4518-a958-64995f5c00b0-config-volume\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801961 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.801980 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.802001 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8wl\" (UniqueName: \"kubernetes.io/projected/dfea4e8f-4296-4aad-97b3-0d571e9c527c-kube-api-access-tg8wl\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.802022 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.802048 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hth4h\" (UniqueName: \"kubernetes.io/projected/a8b814f1-a5f8-4349-8695-deb0ef824ff8-kube-api-access-hth4h\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.802286 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-plugins-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.802797 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0fa30af-feba-4653-8b34-bfc34eed90da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.805642 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.805708 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2953b651-522d-4749-8416-b2c1922c26b3-signing-cabundle\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.806342 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-images\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.807778 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b814f1-a5f8-4349-8695-deb0ef824ff8-config\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"
Feb 03 00:17:29 crc kubenswrapper[4798]: E0203 00:17:29.807891 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.30786769 +0000 UTC m=+142.073857881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.808234 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00003ad4-3d39-4600-ac7e-11b35b3d67c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.809843 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.810004 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb78bcb9-c424-4592-99c4-e1a4d711d81b-cert\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.810963 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9310c18-e995-4f6b-b49a-c0cd2f574506-csi-data-dir\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.811427 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-srv-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.811685 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5b455ef-fa69-453a-9c72-6e952faae9db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.812096 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ea79d8a-686d-411b-8996-82a2e2a669fe-metrics-tls\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.812562 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-config\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.813090 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfea4e8f-4296-4aad-97b3-0d571e9c527c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.813521 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/47fb3222-46ee-4e02-9a2d-ecc944492a83-tmpfs\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.814017 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.815116 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-profile-collector-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.815579 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b814f1-a5f8-4349-8695-deb0ef824ff8-serving-cert\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.816049 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-certs\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.816963 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00003ad4-3d39-4600-ac7e-11b35b3d67c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.817126 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-webhook-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.817219 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47fb3222-46ee-4e02-9a2d-ecc944492a83-apiservice-cert\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.817872 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2953b651-522d-4749-8416-b2c1922c26b3-signing-key\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.817887 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0fa30af-feba-4653-8b34-bfc34eed90da-proxy-tls\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.819587 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.820369 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r96dl\" (UniqueName: \"kubernetes.io/projected/e80fb785-eead-4ebd-9e0c-0f5c548c257c-kube-api-access-r96dl\") pod \"downloads-7954f5f757-k8pqq\" (UID: \"e80fb785-eead-4ebd-9e0c-0f5c548c257c\") " pod="openshift-console/downloads-7954f5f757-k8pqq"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.820702 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc175a81-0556-44fd-95b9-1ec1e9af0f69-srv-cert\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.820849 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c9866e-cbf6-47d0-91f1-25412166eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.821129 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b52bf0-b7fc-4518-a958-64995f5c00b0-config-volume\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.821329 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.821629 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.822496 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: \"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.823066 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-node-bootstrap-token\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.823245 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66b52bf0-b7fc-4518-a958-64995f5c00b0-metrics-tls\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.825251 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.825313 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ea79d8a-686d-411b-8996-82a2e2a669fe-trusted-ca\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.826414 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.826732 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mqqj9"]
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.828063 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfea4e8f-4296-4aad-97b3-0d571e9c527c-proxy-tls\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.832205 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0dc408c-7d0c-4c18-a500-bcffb273b616-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nq4gt\" (UID: \"e0dc408c-7d0c-4c18-a500-bcffb273b616\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.844009 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.866885 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8pqq"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.867972 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmbbx\" (UniqueName: \"kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx\") pod \"route-controller-manager-6576b87f9c-6pxdl\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.875850 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r8z8z"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.882638 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cxg8\" (UniqueName: \"kubernetes.io/projected/11dc6c8b-70db-4d88-8792-a4a100330c8a-kube-api-access-2cxg8\") pod \"authentication-operator-69f744f599-sw2b4\" (UID: \"11dc6c8b-70db-4d88-8792-a4a100330c8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.903275 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86"
Feb 03 00:17:29 crc kubenswrapper[4798]: E0203 00:17:29.903739 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.403726204 +0000 UTC m=+142.169716215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.904123 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddfa4269-a259-466c-b0fd-be3ae32849ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gtv4l\" (UID: \"ddfa4269-a259-466c-b0fd-be3ae32849ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.904128 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.913012 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.922975 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.938810 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.943906 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-th92f"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.951415 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpdl\" (UniqueName: \"kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl\") pod \"oauth-openshift-558db77b4-7m7t5\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.966720 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt"
Feb 03 00:17:29 crc kubenswrapper[4798]: I0203 00:17:29.967317 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m69v\" (UniqueName: \"kubernetes.io/projected/b35482f6-84f0-4bbc-8bd4-2787b5912589-kube-api-access-5m69v\") pod \"dns-operator-744455d44c-nwr6q\" (UID: \"b35482f6-84f0-4bbc-8bd4-2787b5912589\") " pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.001976 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx"]
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.004456 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.005108 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.505082803 +0000 UTC m=+142.271072814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.013645 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8vc\" (UniqueName: \"kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc\") pod \"image-pruner-29501280-nhgw5\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") " pod="openshift-image-registry/image-pruner-29501280-nhgw5"
Feb 03 00:17:30 crc kubenswrapper[4798]: W0203 00:17:30.023071 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded376d37_629a_48fd_81f9_218864e9b711.slice/crio-5b612751b9297fd586043f54554e262d508e68f8759315c3695aac60bd56a380 WatchSource:0}: Error finding container 5b612751b9297fd586043f54554e262d508e68f8759315c3695aac60bd56a380: Status 404 returned error can't find the container with id 5b612751b9297fd586043f54554e262d508e68f8759315c3695aac60bd56a380
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.027761 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9v9\" (UniqueName: \"kubernetes.io/projected/a963b867-b963-4b9e-abe9-47088ff98bea-kube-api-access-gx9v9\") pod \"router-default-5444994796-4c8fd\" (UID: \"a963b867-b963-4b9e-abe9-47088ff98bea\") " pod="openshift-ingress/router-default-5444994796-4c8fd"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.058908 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvk7w\" (UniqueName: \"kubernetes.io/projected/8ad9a195-3fae-4ba2-a2c6-577888216124-kube-api-access-wvk7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8gmh8\" (UID: \"8ad9a195-3fae-4ba2-a2c6-577888216124\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.067274 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8t2w\" (UniqueName: \"kubernetes.io/projected/f0fa30af-feba-4653-8b34-bfc34eed90da-kube-api-access-d8t2w\") pod \"machine-config-controller-84d6567774-lzsmv\" (UID: \"f0fa30af-feba-4653-8b34-bfc34eed90da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.086349 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z88z\" (UniqueName: \"kubernetes.io/projected/67a133bc-8ad5-4088-b5ec-122ec4f32c4d-kube-api-access-8z88z\") pod \"control-plane-machine-set-operator-78cbb6b69f-t2s9r\" (UID: \"67a133bc-8ad5-4088-b5ec-122ec4f32c4d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.105952 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.107728 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86"
Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.108338 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.608320709 +0000 UTC m=+142.374310790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.108802 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ll8\" (UniqueName: \"kubernetes.io/projected/a5b455ef-fa69-453a-9c72-6e952faae9db-kube-api-access-f5ll8\") pod \"multus-admission-controller-857f4d67dd-9vrvn\" (UID: \"a5b455ef-fa69-453a-9c72-6e952faae9db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.135299 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hth4h\" (UniqueName: \"kubernetes.io/projected/a8b814f1-a5f8-4349-8695-deb0ef824ff8-kube-api-access-hth4h\") pod \"service-ca-operator-777779d784-lk9zk\" (UID: \"a8b814f1-a5f8-4349-8695-deb0ef824ff8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.142014 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.151608 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdj4v\" (UniqueName: \"kubernetes.io/projected/a2481f1c-dd12-4275-be76-110b4ad35541-kube-api-access-vdj4v\") pod \"migrator-59844c95c7-qdl4h\" (UID: \"a2481f1c-dd12-4275-be76-110b4ad35541\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h"
Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.160967 4798 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.163803 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzczm\" (UniqueName: \"kubernetes.io/projected/cc175a81-0556-44fd-95b9-1ec1e9af0f69-kube-api-access-qzczm\") pod \"catalog-operator-68c6474976-p5f22\" (UID: \"cc175a81-0556-44fd-95b9-1ec1e9af0f69\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.185469 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv94c\" (UniqueName: \"kubernetes.io/projected/47fb3222-46ee-4e02-9a2d-ecc944492a83-kube-api-access-gv94c\") pod \"packageserver-d55dfcdfc-mntxm\" (UID: \"47fb3222-46ee-4e02-9a2d-ecc944492a83\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.185954 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29501280-nhgw5" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.211614 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.212053 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.712037077 +0000 UTC m=+142.478027078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.213983 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.215610 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snh8\" (UniqueName: \"kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8\") pod \"collect-profiles-29501295-v5q6w\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.230720 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.231219 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wc9\" (UniqueName: \"kubernetes.io/projected/3f392f5f-bc9d-4730-ba92-c9083f55f6e9-kube-api-access-48wc9\") pod \"olm-operator-6b444d44fb-4ptdj\" (UID: \"3f392f5f-bc9d-4730-ba92-c9083f55f6e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.246839 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4d4x\" (UniqueName: \"kubernetes.io/projected/00003ad4-3d39-4600-ac7e-11b35b3d67c1-kube-api-access-q4d4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hf94\" (UID: \"00003ad4-3d39-4600-ac7e-11b35b3d67c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.266782 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.273115 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.274410 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zjp\" (UniqueName: \"kubernetes.io/projected/c9c9866e-cbf6-47d0-91f1-25412166eb4f-kube-api-access-f6zjp\") pod \"package-server-manager-789f6589d5-9v9rv\" (UID: \"c9c9866e-cbf6-47d0-91f1-25412166eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.279234 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8pqq"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.284820 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flz4q\" (UniqueName: \"kubernetes.io/projected/f9310c18-e995-4f6b-b49a-c0cd2f574506-kube-api-access-flz4q\") pod \"csi-hostpathplugin-rsdzb\" (UID: \"f9310c18-e995-4f6b-b49a-c0cd2f574506\") " pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.297812 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.304194 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gt9\" (UniqueName: \"kubernetes.io/projected/66b52bf0-b7fc-4518-a958-64995f5c00b0-kube-api-access-89gt9\") pod \"dns-default-ckdgn\" (UID: \"66b52bf0-b7fc-4518-a958-64995f5c00b0\") " pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.313334 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.314205 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.814172415 +0000 UTC m=+142.580162426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.319819 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.332112 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.333209 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.337741 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.339417 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk6js\" (UniqueName: \"kubernetes.io/projected/1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5-kube-api-access-qk6js\") pod \"machine-config-server-jxs7f\" (UID: \"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5\") " pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.363010 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.373549 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbvl\" (UniqueName: \"kubernetes.io/projected/eb78bcb9-c424-4592-99c4-e1a4d711d81b-kube-api-access-8pbvl\") pod \"ingress-canary-5wrch\" (UID: \"eb78bcb9-c424-4592-99c4-e1a4d711d81b\") " pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.377395 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.380379 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktzlp\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-kube-api-access-ktzlp\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.383622 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.395189 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.406101 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.416306 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.416898 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:30.916879758 +0000 UTC m=+142.682869769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.426607 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.442369 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.447074 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.447866 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r8z8z"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.453340 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd7rw\" (UniqueName: \"kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw\") pod \"marketplace-operator-79b997595-9xz4k\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.453985 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpxb\" (UniqueName: \"kubernetes.io/projected/0480edbf-2598-4ac3-ab98-57a7e58f8cc0-kube-api-access-7hpxb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7hhdh\" (UID: 
\"0480edbf-2598-4ac3-ab98-57a7e58f8cc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.454056 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.461031 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5wrch" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.469550 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jxs7f" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.473717 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4af79305-dc34-45ee-a27d-f8a6f08ab9c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bwvjw\" (UID: \"4af79305-dc34-45ee-a27d-f8a6f08ab9c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.482603 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.490370 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnjq\" (UniqueName: \"kubernetes.io/projected/2953b651-522d-4749-8416-b2c1922c26b3-kube-api-access-9gnjq\") pod \"service-ca-9c57cc56f-lmn6t\" (UID: \"2953b651-522d-4749-8416-b2c1922c26b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.490516 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8wl\" (UniqueName: 
\"kubernetes.io/projected/dfea4e8f-4296-4aad-97b3-0d571e9c527c-kube-api-access-tg8wl\") pod \"machine-config-operator-74547568cd-99jq8\" (UID: \"dfea4e8f-4296-4aad-97b3-0d571e9c527c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.505528 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ea79d8a-686d-411b-8996-82a2e2a669fe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8p8fg\" (UID: \"5ea79d8a-686d-411b-8996-82a2e2a669fe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.519951 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-th92f"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.520683 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.521020 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.021005966 +0000 UTC m=+142.786995967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.522177 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jth29"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.571435 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sw2b4"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.572353 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nwr6q"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.581425 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.589760 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.605090 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.626068 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.626674 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.626816 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.126796205 +0000 UTC m=+142.892786216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.627052 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.627449 4798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.127427611 +0000 UTC m=+142.893417802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.627779 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29501280-nhgw5"] Feb 03 00:17:30 crc kubenswrapper[4798]: W0203 00:17:30.639307 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0dc408c_7d0c_4c18_a500_bcffb273b616.slice/crio-1c3c7207dcde32f953d8ccd01dae6f983455e5f2c93de5450ac0c2c760a5ce5a WatchSource:0}: Error finding container 1c3c7207dcde32f953d8ccd01dae6f983455e5f2c93de5450ac0c2c760a5ce5a: Status 404 returned error can't find the container with id 1c3c7207dcde32f953d8ccd01dae6f983455e5f2c93de5450ac0c2c760a5ce5a Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.642908 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.649586 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.668885 4798 csr.go:261] certificate signing request csr-7frzv is approved, waiting to be issued Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.674614 4798 csr.go:257] certificate signing request csr-7frzv is issued Feb 03 00:17:30 crc kubenswrapper[4798]: W0203 00:17:30.685344 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa10d48_6402_4828_a366_1a9ea825a444.slice/crio-0d1bb7676f75835fa07162da657a07777c5c03ec865ac47e0b8b3dfca65c80a2 WatchSource:0}: Error finding container 0d1bb7676f75835fa07162da657a07777c5c03ec865ac47e0b8b3dfca65c80a2: Status 404 returned error can't find the container with id 0d1bb7676f75835fa07162da657a07777c5c03ec865ac47e0b8b3dfca65c80a2 Feb 03 00:17:30 crc kubenswrapper[4798]: W0203 00:17:30.716073 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef7f31fe_b09b_4388_a076_1a287c202292.slice/crio-a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619 WatchSource:0}: Error finding container a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619: Status 404 returned error can't find the container with id a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619 Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.724486 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" event={"ID":"ed376d37-629a-48fd-81f9-218864e9b711","Type":"ContainerStarted","Data":"a28f6a18873ff6034543bf447083b105435c7e8e245d531a1a25820837e7bf97"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.724560 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" 
event={"ID":"ed376d37-629a-48fd-81f9-218864e9b711","Type":"ContainerStarted","Data":"5b612751b9297fd586043f54554e262d508e68f8759315c3695aac60bd56a380"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.728596 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.728923 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.228894582 +0000 UTC m=+142.994884593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.729021 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.729437 4798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.229420936 +0000 UTC m=+142.995410947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.742066 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8pqq" event={"ID":"e80fb785-eead-4ebd-9e0c-0f5c548c257c","Type":"ContainerStarted","Data":"bb4b736a012d36d0321209182241afea1648a0ea2f305b9936c0febd8b1ff3d3"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.751079 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" event={"ID":"841ea4a0-8ab8-4a72-96a8-40578497e9c4","Type":"ContainerStarted","Data":"7258890c030f191b42236f515c686ac4275b9f62db55410243e21f0a2d6bf208"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.783868 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4c8fd" event={"ID":"a963b867-b963-4b9e-abe9-47088ff98bea","Type":"ContainerStarted","Data":"ce71646c347c8a4625fc9cd04786cb4d7d39c3f556e8d801eaa88b38fc98f29e"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.791023 4798 generic.go:334] "Generic (PLEG): container finished" podID="b86e799d-e230-4159-9a60-a92b5caee0fa" containerID="bf3b0900c22b2bd8dec29bd624d29ba60c64777e0747bafb2cf063bcc4928c74" exitCode=0 Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.791119 
4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" event={"ID":"b86e799d-e230-4159-9a60-a92b5caee0fa","Type":"ContainerDied","Data":"bf3b0900c22b2bd8dec29bd624d29ba60c64777e0747bafb2cf063bcc4928c74"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.791166 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" event={"ID":"b86e799d-e230-4159-9a60-a92b5caee0fa","Type":"ContainerStarted","Data":"30a8ee325ce756aefff9bf849e123e288618b938f5fc264739f2706aa4a0c0f4"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.795228 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" event={"ID":"ed0d46a1-22a9-46aa-b0d8-c65624861d9a","Type":"ContainerStarted","Data":"d9589cd54df91a59597cb1f2246a2a25990a1830cc45e9a2b2ae3bdd8ad17f08"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.795258 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" event={"ID":"ed0d46a1-22a9-46aa-b0d8-c65624861d9a","Type":"ContainerStarted","Data":"6059851ec51640a28a5caac424d9a877994cfe895843c0912f167e8da69f9240"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.795268 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" event={"ID":"ed0d46a1-22a9-46aa-b0d8-c65624861d9a","Type":"ContainerStarted","Data":"1b02ba7e708b191858e3f930a10f04b59558a40983cf8ba835b7c5c2e7890ca7"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.797733 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mqqj9" event={"ID":"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f","Type":"ContainerStarted","Data":"becddff7e3489cf587051f2a81dd573853757ebc877ddc5bae7f8f2d0f8904da"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.797796 4798 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mqqj9" event={"ID":"dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f","Type":"ContainerStarted","Data":"56ac86716cf752c5d731584b6606d784cacdace187d1e71000c4a8df7a0a64af"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.809732 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" event={"ID":"0116d04c-adc4-4adc-ab03-21058672d6e8","Type":"ContainerStarted","Data":"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.809804 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" event={"ID":"0116d04c-adc4-4adc-ab03-21058672d6e8","Type":"ContainerStarted","Data":"447985d593cd3de88977e310cdad52a4f5db74c1adad996c9062d1232e20fd97"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.810850 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.821932 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" event={"ID":"ddfa4269-a259-466c-b0fd-be3ae32849ee","Type":"ContainerStarted","Data":"59136c6ba2805065bec4ed96bdc1081cfa83e35338039f8f792b537e949f901d"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.826131 4798 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tpkwn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.826190 4798 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.828539 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" event={"ID":"83bfe431-00e7-4ce9-a648-6e5634edfd3f","Type":"ContainerStarted","Data":"0cb495b1aa7fe1d1ab3711cc684680b7109cc056e49c517b37bd258815f49378"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.829518 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" event={"ID":"3fa10d48-6402-4828-a366-1a9ea825a444","Type":"ContainerStarted","Data":"0d1bb7676f75835fa07162da657a07777c5c03ec865ac47e0b8b3dfca65c80a2"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.831955 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" event={"ID":"47d61fbc-8622-45c3-a0fd-b52320050c34","Type":"ContainerStarted","Data":"665cd7ba929747187d21f09bae15c2997d4f21acdcbdb1128560f993cdaa0c45"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.840030 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.840306 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" event={"ID":"11dc6c8b-70db-4d88-8792-a4a100330c8a","Type":"ContainerStarted","Data":"9b72c620919ea710a03315f7456a4bc586876f4eefb11ff6f3909129c892764d"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.840835 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.841007 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.340982097 +0000 UTC m=+143.106972108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.841069 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.841412 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.341400128 +0000 UTC m=+143.107390199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.842056 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.865092 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.906554 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" event={"ID":"b35482f6-84f0-4bbc-8bd4-2787b5912589","Type":"ContainerStarted","Data":"49801a30ec126f85d6b5ad29cfa2bce04d5769ef41bee8d3419711b9718ab210"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.942154 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.942285 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.442265136 +0000 UTC m=+143.208255147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.943488 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8tbx2" podStartSLOduration=121.943473455 podStartE2EDuration="2m1.943473455s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:30.932247527 +0000 UTC m=+142.698237548" watchObservedRunningTime="2026-02-03 00:17:30.943473455 +0000 UTC m=+142.709463466" Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.944247 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:30 crc kubenswrapper[4798]: E0203 00:17:30.948520 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.448497769 +0000 UTC m=+143.214487780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:30 crc kubenswrapper[4798]: W0203 00:17:30.951859 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e0d80fe_a3b6_4d24_9353_fe1ed0df2da5.slice/crio-9f2da50b787b5dbacd2c47d618d7e05fbcde77c12475ee07648b1a71be958b76 WatchSource:0}: Error finding container 9f2da50b787b5dbacd2c47d618d7e05fbcde77c12475ee07648b1a71be958b76: Status 404 returned error can't find the container with id 9f2da50b787b5dbacd2c47d618d7e05fbcde77c12475ee07648b1a71be958b76 Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.980254 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" event={"ID":"e0dc408c-7d0c-4c18-a500-bcffb273b616","Type":"ContainerStarted","Data":"1c3c7207dcde32f953d8ccd01dae6f983455e5f2c93de5450ac0c2c760a5ce5a"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.980316 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.980333 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.980344 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7m7t5"] Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 
00:17:30.988815 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-28wct" event={"ID":"ff545dc5-468e-410c-aacb-2c26ef11274e","Type":"ContainerStarted","Data":"29c93bfb63cd115336b869df01e9eff6206830263ac1e7731de27c528db258c5"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.988967 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-28wct" event={"ID":"ff545dc5-468e-410c-aacb-2c26ef11274e","Type":"ContainerStarted","Data":"5abcf0bc96de8cc868f108b7934c65cbb8ecd3da03660be799d481fcc590cd47"} Feb 03 00:17:30 crc kubenswrapper[4798]: I0203 00:17:30.992015 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.001571 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv"] Feb 03 00:17:31 crc kubenswrapper[4798]: W0203 00:17:31.018869 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46425fd6_9499_4e9a_8450_3fafbe2c6611.slice/crio-d75a8510ffa24831b912d8d89149db14e0220f16d9853f49aead6ee19bbb3113 WatchSource:0}: Error finding container d75a8510ffa24831b912d8d89149db14e0220f16d9853f49aead6ee19bbb3113: Status 404 returned error can't find the container with id d75a8510ffa24831b912d8d89149db14e0220f16d9853f49aead6ee19bbb3113 Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.020954 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.045531 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.048753 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.548718081 +0000 UTC m=+143.314708092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.147716 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.148319 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.648306566 +0000 UTC m=+143.414296577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.249262 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.250031 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.750014104 +0000 UTC m=+143.516004105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.351254 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.351963 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.851948908 +0000 UTC m=+143.617938919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.452356 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.452941 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:31.952922377 +0000 UTC m=+143.718912388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.514870 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vrvn"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.531420 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ckdgn"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.547524 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.557692 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.558146 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.058119412 +0000 UTC m=+143.824109423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.664277 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.665067 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.165040919 +0000 UTC m=+143.931030930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.674008 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.683083 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 00:12:30 +0000 UTC, rotation deadline is 2026-11-01 04:48:51.264532632 +0000 UTC Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.683143 4798 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6508h31m19.581391774s for next certificate rotation Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.749866 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mqqj9" podStartSLOduration=122.749849598 podStartE2EDuration="2m2.749849598s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:31.710185177 +0000 UTC m=+143.476175188" watchObservedRunningTime="2026-02-03 00:17:31.749849598 +0000 UTC m=+143.515839609" Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.767578 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: 
\"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.768323 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.268219093 +0000 UTC m=+144.034209094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.776344 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsdzb"] Feb 03 00:17:31 crc kubenswrapper[4798]: W0203 00:17:31.787186 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a133bc_8ad5_4088_b5ec_122ec4f32c4d.slice/crio-8aaff168811c6de9bfe4fdd48d408966f7997417944df4889ac189506bd4b3bf WatchSource:0}: Error finding container 8aaff168811c6de9bfe4fdd48d408966f7997417944df4889ac189506bd4b3bf: Status 404 returned error can't find the container with id 8aaff168811c6de9bfe4fdd48d408966f7997417944df4889ac189506bd4b3bf Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.810820 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.870087 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.870610 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.370594978 +0000 UTC m=+144.136584979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.871177 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" podStartSLOduration=122.871166122 podStartE2EDuration="2m2.871166122s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:31.869710526 +0000 UTC m=+143.635700537" watchObservedRunningTime="2026-02-03 00:17:31.871166122 +0000 UTC m=+143.637156133" Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.890633 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.893477 4798 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5wrch"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.958199 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qsvlx" podStartSLOduration=122.958180196 podStartE2EDuration="2m2.958180196s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:31.950904596 +0000 UTC m=+143.716894607" watchObservedRunningTime="2026-02-03 00:17:31.958180196 +0000 UTC m=+143.724170207" Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.960988 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lmn6t"] Feb 03 00:17:31 crc kubenswrapper[4798]: I0203 00:17:31.971248 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:31 crc kubenswrapper[4798]: E0203 00:17:31.972376 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.472364027 +0000 UTC m=+144.238354038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:31 crc kubenswrapper[4798]: W0203 00:17:31.989247 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4af79305_dc34_45ee_a27d_f8a6f08ab9c4.slice/crio-900b67e11285141dcb600fc0b19afb16114857713432b0a9d24b53db8743e526 WatchSource:0}: Error finding container 900b67e11285141dcb600fc0b19afb16114857713432b0a9d24b53db8743e526: Status 404 returned error can't find the container with id 900b67e11285141dcb600fc0b19afb16114857713432b0a9d24b53db8743e526 Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.072184 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.072541 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.572522347 +0000 UTC m=+144.338512358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.074214 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg"] Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.080910 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" event={"ID":"67a133bc-8ad5-4088-b5ec-122ec4f32c4d","Type":"ContainerStarted","Data":"8aaff168811c6de9bfe4fdd48d408966f7997417944df4889ac189506bd4b3bf"} Feb 03 00:17:32 crc kubenswrapper[4798]: W0203 00:17:32.082884 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2953b651_522d_4749_8416_b2c1922c26b3.slice/crio-80a61589d148c03de6a5cd713647a812aef9648bbafaafc84d1282145f8773ac WatchSource:0}: Error finding container 80a61589d148c03de6a5cd713647a812aef9648bbafaafc84d1282145f8773ac: Status 404 returned error can't find the container with id 80a61589d148c03de6a5cd713647a812aef9648bbafaafc84d1282145f8773ac Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.096233 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" event={"ID":"46425fd6-9499-4e9a-8450-3fafbe2c6611","Type":"ContainerStarted","Data":"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.096272 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" event={"ID":"46425fd6-9499-4e9a-8450-3fafbe2c6611","Type":"ContainerStarted","Data":"d75a8510ffa24831b912d8d89149db14e0220f16d9853f49aead6ee19bbb3113"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.096901 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.108049 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" event={"ID":"83bfe431-00e7-4ce9-a648-6e5634edfd3f","Type":"ContainerStarted","Data":"762de5f0530bdea4125a2e8dda7f6ada5a55219705a9a64f8a6e2ee9008e31f3"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.109462 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.116286 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"] Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.122784 4798 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6pxdl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.122850 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 03 00:17:32 crc 
kubenswrapper[4798]: I0203 00:17:32.122884 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" event={"ID":"9ba81b6e-6d73-4dc4-90ba-4690b971f882","Type":"ContainerStarted","Data":"689b0982bfd0220fa8f0868ffb9d0b5a5d11fa9ebbaf2ac27f8070b4c62dd67b"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.130507 4798 patch_prober.go:28] interesting pod/console-operator-58897d9998-r8z8z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.130562 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" podUID="83bfe431-00e7-4ce9-a648-6e5634edfd3f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.130641 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" event={"ID":"a5b455ef-fa69-453a-9c72-6e952faae9db","Type":"ContainerStarted","Data":"0d2110c49df048b4066c5d3939e2b0a28d7d08922a0d1eb1ee0425cb40fa1829"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.135563 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"] Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.171051 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29501280-nhgw5" event={"ID":"ef7f31fe-b09b-4388-a076-1a287c202292","Type":"ContainerStarted","Data":"23be4d8cbb0c6bd61a93f82865274d81bcf0e6073cc39acc7dc8afa2be409b03"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 
00:17:32.171105 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29501280-nhgw5" event={"ID":"ef7f31fe-b09b-4388-a076-1a287c202292","Type":"ContainerStarted","Data":"a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.178380 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.179928 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.679915206 +0000 UTC m=+144.445905217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.181821 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" event={"ID":"8ad9a195-3fae-4ba2-a2c6-577888216124","Type":"ContainerStarted","Data":"69a106c4c903298a2177b02350adbbfc0f2abd9751756fc56eb7d5584797c666"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.181855 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" event={"ID":"8ad9a195-3fae-4ba2-a2c6-577888216124","Type":"ContainerStarted","Data":"d72455f7e6f038d824a9959ce30f3db1a86259c3ef881c3071d04052a8cdada2"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.184242 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" event={"ID":"47fb3222-46ee-4e02-9a2d-ecc944492a83","Type":"ContainerStarted","Data":"bbfce0fa50ec870dcc8ae7cacba1550d742e35c71cd81294d43bf5e82c19c81c"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.184272 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" event={"ID":"47fb3222-46ee-4e02-9a2d-ecc944492a83","Type":"ContainerStarted","Data":"fe9cc406d658ce122afb8273e01e2e29adca2dcfa2c5f242f487f2f9dae3f46c"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.184956 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.192841 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" event={"ID":"00003ad4-3d39-4600-ac7e-11b35b3d67c1","Type":"ContainerStarted","Data":"9b1b50b95c5e4588267cf79d72cf6f7f7ac41668c458422c3bf63b5b9d049d6d"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.196502 4798 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mntxm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.196764 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" podUID="47fb3222-46ee-4e02-9a2d-ecc944492a83" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.198906 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8"] Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.201733 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh"] Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.221151 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" event={"ID":"3f392f5f-bc9d-4730-ba92-c9083f55f6e9","Type":"ContainerStarted","Data":"418c303605a988714deb68a4339258a599218462126719f2e2165500285186d7"} Feb 03 00:17:32 crc 
kubenswrapper[4798]: I0203 00:17:32.240361 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ghhvz" podStartSLOduration=123.240344112 podStartE2EDuration="2m3.240344112s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.221071425 +0000 UTC m=+143.987061436" watchObservedRunningTime="2026-02-03 00:17:32.240344112 +0000 UTC m=+144.006334123" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.290797 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.291877 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" event={"ID":"0c4c1155-ba10-4dd4-95a8-105c5c7168eb","Type":"ContainerStarted","Data":"0d7760dad2cc4819973300069c3b491210a28d4e66cc69426c3453167e5618c5"} Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.292270 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.792252427 +0000 UTC m=+144.558242438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.316333 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8pqq" event={"ID":"e80fb785-eead-4ebd-9e0c-0f5c548c257c","Type":"ContainerStarted","Data":"315c055ee50b028d785ddb77766ca526d64ce7c3710d0253ad1fe11404bc8cda"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.317074 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k8pqq" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.320149 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.320193 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.325278 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" 
event={"ID":"47d61fbc-8622-45c3-a0fd-b52320050c34","Type":"ContainerStarted","Data":"1e56f9a1996499143e578bc50542d954fba9bd55f4db724307995323afd19bb9"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.327777 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jxs7f" event={"ID":"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5","Type":"ContainerStarted","Data":"6cb2952655061f1ab29b0554ee5e7ecfc0d13e8270150acee4686d301c60d1fa"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.327810 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jxs7f" event={"ID":"1e0d80fe-a3b6-4d24-9353-fe1ed0df2da5","Type":"ContainerStarted","Data":"9f2da50b787b5dbacd2c47d618d7e05fbcde77c12475ee07648b1a71be958b76"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.330426 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" event={"ID":"cc175a81-0556-44fd-95b9-1ec1e9af0f69","Type":"ContainerStarted","Data":"7a0bd8b1a223ad8445e03f8f19de9cc15aa6c6bb15f35052bc5aded69af624a6"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.344315 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" event={"ID":"ddfa4269-a259-466c-b0fd-be3ae32849ee","Type":"ContainerStarted","Data":"4ef5c2a07428bc41fce86d79d57ce7c9281282511b1034b342141eb127107f10"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.364971 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" event={"ID":"f9310c18-e995-4f6b-b49a-c0cd2f574506","Type":"ContainerStarted","Data":"9b2b2ed6057ce5d3c852f1322b3b4ecce834809b8ff71d1af5301ad3d67f34ed"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.368774 4798 generic.go:334] "Generic (PLEG): container finished" 
podID="3fa10d48-6402-4828-a366-1a9ea825a444" containerID="d8157e98491c44a9ca02dacdd129013492b456c77e41c5a7b17d230689627d89" exitCode=0 Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.368832 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" event={"ID":"3fa10d48-6402-4828-a366-1a9ea825a444","Type":"ContainerDied","Data":"d8157e98491c44a9ca02dacdd129013492b456c77e41c5a7b17d230689627d89"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.392293 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.392572 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:32.89256073 +0000 UTC m=+144.658550741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.392812 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" event={"ID":"a8b814f1-a5f8-4349-8695-deb0ef824ff8","Type":"ContainerStarted","Data":"05e8fe0dbdd0d842ee77b6359dbefd90305cbb04977b1bb07f62595a57993d5f"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.392917 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" event={"ID":"a8b814f1-a5f8-4349-8695-deb0ef824ff8","Type":"ContainerStarted","Data":"370cd02f6fc0bf872adcb4556afaaf1a1d8b0a0647ffd2578c6510e54259c3f0"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.415747 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" event={"ID":"11dc6c8b-70db-4d88-8792-a4a100330c8a","Type":"ContainerStarted","Data":"902eaa350dac4b5d38d988546dafc078eac75ec20f2f59887e2940e9eb864e90"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.452623 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-28wct" podStartSLOduration=123.452604697 podStartE2EDuration="2m3.452604697s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.438942359 +0000 UTC m=+144.204932370" 
watchObservedRunningTime="2026-02-03 00:17:32.452604697 +0000 UTC m=+144.218594708" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.506391 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ckdgn" event={"ID":"66b52bf0-b7fc-4518-a958-64995f5c00b0","Type":"ContainerStarted","Data":"decaec70be3c3578f7c5da94919c8baab3cb481fe63abaf41e5fd7de1cfd7a70"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.508834 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" podStartSLOduration=123.508809888 podStartE2EDuration="2m3.508809888s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.498549704 +0000 UTC m=+144.264539715" watchObservedRunningTime="2026-02-03 00:17:32.508809888 +0000 UTC m=+144.274799889" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.522003 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.523285 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.023266807 +0000 UTC m=+144.789256818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.558859 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4c8fd" event={"ID":"a963b867-b963-4b9e-abe9-47088ff98bea","Type":"ContainerStarted","Data":"3f43670b62d3be45cc1bd3aa0cdc34bfe25225745df1fea64fb2e8815ff6a382"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.574672 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" event={"ID":"f0fa30af-feba-4653-8b34-bfc34eed90da","Type":"ContainerStarted","Data":"bfa66a1c1bd967e5b0a636a07813bf90b097e794dfc510ece7ffbea7b1f45873"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.580085 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" event={"ID":"a2481f1c-dd12-4275-be76-110b4ad35541","Type":"ContainerStarted","Data":"5539fbceeec0aa840b7d38fdd72a584506c1d6f5c47f2b281ec567382d407eac"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.580112 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" event={"ID":"a2481f1c-dd12-4275-be76-110b4ad35541","Type":"ContainerStarted","Data":"3265928231a9beb5acaabf33b624569eb7e8e8d1b75c630c7f533c550898a374"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.594441 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" event={"ID":"4af79305-dc34-45ee-a27d-f8a6f08ab9c4","Type":"ContainerStarted","Data":"900b67e11285141dcb600fc0b19afb16114857713432b0a9d24b53db8743e526"} Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.605253 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29501280-nhgw5" podStartSLOduration=123.605234436 podStartE2EDuration="2m3.605234436s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.554312665 +0000 UTC m=+144.320302676" watchObservedRunningTime="2026-02-03 00:17:32.605234436 +0000 UTC m=+144.371224447" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.606911 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k8pqq" podStartSLOduration=123.606904087 podStartE2EDuration="2m3.606904087s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.588103722 +0000 UTC m=+144.354093733" watchObservedRunningTime="2026-02-03 00:17:32.606904087 +0000 UTC m=+144.372894098" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.612399 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.615147 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gtv4l" podStartSLOduration=123.61513177 podStartE2EDuration="2m3.61513177s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.614134127 +0000 UTC m=+144.380124138" watchObservedRunningTime="2026-02-03 00:17:32.61513177 +0000 UTC m=+144.381121781" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.625420 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.625737 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.125726903 +0000 UTC m=+144.891716914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.664870 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" podStartSLOduration=123.664857322 podStartE2EDuration="2m3.664857322s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.662934815 +0000 UTC m=+144.428924826" watchObservedRunningTime="2026-02-03 00:17:32.664857322 +0000 UTC m=+144.430847333" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.726326 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.728596 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.228553908 +0000 UTC m=+144.994544080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.744758 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gmh8" podStartSLOduration=123.74473917 podStartE2EDuration="2m3.74473917s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.744562855 +0000 UTC m=+144.510552876" watchObservedRunningTime="2026-02-03 00:17:32.74473917 +0000 UTC m=+144.510729181" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.745114 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" podStartSLOduration=123.745109699 podStartE2EDuration="2m3.745109699s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.702181766 +0000 UTC m=+144.468171777" watchObservedRunningTime="2026-02-03 00:17:32.745109699 +0000 UTC m=+144.511099710" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.780949 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jxs7f" podStartSLOduration=5.780936986 podStartE2EDuration="5.780936986s" podCreationTimestamp="2026-02-03 00:17:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.779969212 +0000 UTC m=+144.545959223" watchObservedRunningTime="2026-02-03 00:17:32.780936986 +0000 UTC m=+144.546926997" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.811206 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sw2b4" podStartSLOduration=123.811185044 podStartE2EDuration="2m3.811185044s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.810740713 +0000 UTC m=+144.576730724" watchObservedRunningTime="2026-02-03 00:17:32.811185044 +0000 UTC m=+144.577175055" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.830438 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.830822 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.33081032 +0000 UTC m=+145.096800331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.931349 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.931530 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.431505633 +0000 UTC m=+145.197495644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.934910 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:32 crc kubenswrapper[4798]: E0203 00:17:32.935396 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.435376249 +0000 UTC m=+145.201366260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.973640 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lk9zk" podStartSLOduration=123.973616195 podStartE2EDuration="2m3.973616195s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.919204919 +0000 UTC m=+144.685194950" watchObservedRunningTime="2026-02-03 00:17:32.973616195 +0000 UTC m=+144.739606196" Feb 03 00:17:32 crc kubenswrapper[4798]: I0203 00:17:32.974079 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4c8fd" podStartSLOduration=123.974071847 podStartE2EDuration="2m3.974071847s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:32.968295735 +0000 UTC m=+144.734285746" watchObservedRunningTime="2026-02-03 00:17:32.974071847 +0000 UTC m=+144.740061858" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.050397 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.050723 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.550694774 +0000 UTC m=+145.316684785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.051689 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.064430 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.564399243 +0000 UTC m=+145.330389254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.168454 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.168926 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.668911041 +0000 UTC m=+145.434901052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.169141 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.169465 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.272064 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4c8fd" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.273388 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.273954 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.773937341 +0000 UTC m=+145.539927352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.280444 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:33 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:33 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:33 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.280503 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.376306 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.376804 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 00:17:33.876761716 +0000 UTC m=+145.642751727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.486473 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.487169 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:33.98715627 +0000 UTC m=+145.753146281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.587387 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.587839 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.087795791 +0000 UTC m=+145.853785802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.628125 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" event={"ID":"0480edbf-2598-4ac3-ab98-57a7e58f8cc0","Type":"ContainerStarted","Data":"a7b388ee378a0cee81314995fa9417a1dd9c676ee7dc72148a1e6addd2836a40"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.628176 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" event={"ID":"0480edbf-2598-4ac3-ab98-57a7e58f8cc0","Type":"ContainerStarted","Data":"c980be029364dc0b8db8eb7dc74be1a2eb41f4f000e16aec0efd2fa407e38e92"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.648476 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" event={"ID":"f0fa30af-feba-4653-8b34-bfc34eed90da","Type":"ContainerStarted","Data":"c105fcb00ce2615fda02812c4b0043b795678926a85d9bb0a360f25026dfb2ee"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.648542 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" event={"ID":"f0fa30af-feba-4653-8b34-bfc34eed90da","Type":"ContainerStarted","Data":"6307b02fe210e7ef912d9a3a7b7c7f43a7f0a01462287ffb34a088e985821eb5"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.657496 4798 
patch_prober.go:28] interesting pod/apiserver-76f77b778f-28wct container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]log ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]etcd ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/generic-apiserver-start-informers ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/max-in-flight-filter ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 03 00:17:33 crc kubenswrapper[4798]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 03 00:17:33 crc kubenswrapper[4798]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/project.openshift.io-projectcache ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/openshift.io-startinformers ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 03 00:17:33 crc kubenswrapper[4798]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 03 00:17:33 crc kubenswrapper[4798]: livez check failed Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.657937 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-28wct" podUID="ff545dc5-468e-410c-aacb-2c26ef11274e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.675405 4798 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7hhdh" podStartSLOduration=124.67538976 podStartE2EDuration="2m4.67538976s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:33.673020801 +0000 UTC m=+145.439010812" watchObservedRunningTime="2026-02-03 00:17:33.67538976 +0000 UTC m=+145.441379761" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.675699 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" event={"ID":"b35482f6-84f0-4bbc-8bd4-2787b5912589","Type":"ContainerStarted","Data":"218af51bbe58986c63785649567fea3e1c2e612fbaf0c600bb1555d11041266f"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.675737 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" event={"ID":"b35482f6-84f0-4bbc-8bd4-2787b5912589","Type":"ContainerStarted","Data":"83038458763c7615c24b37e27d8923c82cf05537114095161e9dcd4e18e0242e"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.688619 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.688936 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 00:17:34.188925145 +0000 UTC m=+145.954915156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.689810 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" event={"ID":"4af79305-dc34-45ee-a27d-f8a6f08ab9c4","Type":"ContainerStarted","Data":"49f8b0318f155b86cea94b27cd3d1dfcbe01d3973e31197c6900de4ff64c5a0b"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.691672 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" event={"ID":"2953b651-522d-4749-8416-b2c1922c26b3","Type":"ContainerStarted","Data":"e7c72a564aebc7c052fdf72c5b75cff3531e4e3e0bde6a30116bb325dff34ba3"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.691697 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" event={"ID":"2953b651-522d-4749-8416-b2c1922c26b3","Type":"ContainerStarted","Data":"80a61589d148c03de6a5cd713647a812aef9648bbafaafc84d1282145f8773ac"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.733101 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" event={"ID":"9ba81b6e-6d73-4dc4-90ba-4690b971f882","Type":"ContainerStarted","Data":"84103a591dfafbb75f0325730f44182cf836a40c65b1a67c9216841674590efc"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.758967 4798 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" event={"ID":"a5b455ef-fa69-453a-9c72-6e952faae9db","Type":"ContainerStarted","Data":"11168b2c67078496ca27e02864686c3101140a58cc69850eeb554cb86cf7e380"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.770701 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" event={"ID":"5ea79d8a-686d-411b-8996-82a2e2a669fe","Type":"ContainerStarted","Data":"6d8fd3e097c4092d02f7e076ffa2eb55dae9ec0e0a88172dcc80bc3217c73c85"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.770775 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" event={"ID":"5ea79d8a-686d-411b-8996-82a2e2a669fe","Type":"ContainerStarted","Data":"4ed47c52b13e3fb2c8c56dce57c55fa45f384aa44b713095f479b9293db89a7c"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.778105 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nwr6q" podStartSLOduration=124.778082552 podStartE2EDuration="2m4.778082552s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:33.743062895 +0000 UTC m=+145.509052906" watchObservedRunningTime="2026-02-03 00:17:33.778082552 +0000 UTC m=+145.544072563" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.779419 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bwvjw" podStartSLOduration=124.779411515 podStartE2EDuration="2m4.779411515s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 00:17:33.778338798 +0000 UTC m=+145.544328809" watchObservedRunningTime="2026-02-03 00:17:33.779411515 +0000 UTC m=+145.545401526" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.803614 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.805196 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.305169523 +0000 UTC m=+146.071159534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.830633 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" event={"ID":"81abe77a-25d5-4fe5-a592-e83853be1b63","Type":"ContainerStarted","Data":"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.830718 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" 
event={"ID":"81abe77a-25d5-4fe5-a592-e83853be1b63","Type":"ContainerStarted","Data":"bfd901c958dddde302b2795ebece6f55a38614ec439b4d9e587830ae1dd9d9bb"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.831419 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.858590 4798 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9xz4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.858734 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.866717 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" event={"ID":"3f392f5f-bc9d-4730-ba92-c9083f55f6e9","Type":"ContainerStarted","Data":"9260583d016a0944552b365603222aee8ae40ff27836ee93ad280bb475259a56"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.867588 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.868667 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" podStartSLOduration=124.868635524 podStartE2EDuration="2m4.868635524s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:33.851712266 +0000 UTC m=+145.617702277" watchObservedRunningTime="2026-02-03 00:17:33.868635524 +0000 UTC m=+145.634625535" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.870425 4798 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4ptdj container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.870464 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" podUID="3f392f5f-bc9d-4730-ba92-c9083f55f6e9" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.893863 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" event={"ID":"0c4c1155-ba10-4dd4-95a8-105c5c7168eb","Type":"ContainerStarted","Data":"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.894949 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.905435 4798 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7m7t5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.905560 4798 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.906080 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:33 crc kubenswrapper[4798]: E0203 00:17:33.908060 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.408011579 +0000 UTC m=+146.174001590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.921426 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lmn6t" podStartSLOduration=124.92140387 podStartE2EDuration="2m4.92140387s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:33.920039226 +0000 UTC m=+145.686029237" watchObservedRunningTime="2026-02-03 00:17:33.92140387 +0000 UTC m=+145.687393881" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.937267 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" event={"ID":"3fa10d48-6402-4828-a366-1a9ea825a444","Type":"ContainerStarted","Data":"dbb6fe687eddf03a5ac36d592061a4b130982989f4c61a6defd8e6485dd22d21"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.937784 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.949353 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" event={"ID":"67a133bc-8ad5-4088-b5ec-122ec4f32c4d","Type":"ContainerStarted","Data":"6a5bf2fa32afbdcb48ac226700c29661ca6945b9664d58becae5fe050b1a4ff2"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.954694 
4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" event={"ID":"47d61fbc-8622-45c3-a0fd-b52320050c34","Type":"ContainerStarted","Data":"0d8efc4107aa3c7d7770183a6ad8bfbe76c2f989bd148c1028378977a7c1b9fd"} Feb 03 00:17:33 crc kubenswrapper[4798]: I0203 00:17:33.962499 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" podStartSLOduration=124.962475408 podStartE2EDuration="2m4.962475408s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:33.960701853 +0000 UTC m=+145.726691874" watchObservedRunningTime="2026-02-03 00:17:33.962475408 +0000 UTC m=+145.728465419" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.018814 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.020081 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.520066703 +0000 UTC m=+146.286056714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.041145 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" event={"ID":"00003ad4-3d39-4600-ac7e-11b35b3d67c1","Type":"ContainerStarted","Data":"a6a2fa79f4dc6de0a72bf8bddcd7f71efdf3a7036ee1f0f66281c6e76584b330"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.082999 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" podStartSLOduration=125.082981211 podStartE2EDuration="2m5.082981211s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.017679884 +0000 UTC m=+145.783669895" watchObservedRunningTime="2026-02-03 00:17:34.082981211 +0000 UTC m=+145.848971222" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.093525 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" event={"ID":"c9c9866e-cbf6-47d0-91f1-25412166eb4f","Type":"ContainerStarted","Data":"9627e403db4b2ab0803dad2027f31995ca9aba9c2e80862288acca9eb066f682"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.093835 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" 
event={"ID":"c9c9866e-cbf6-47d0-91f1-25412166eb4f","Type":"ContainerStarted","Data":"79933f67c32c2347dfae068d67059eb14e69464a9584c4e5e4905e5fbe2dc8b1"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.093918 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" event={"ID":"c9c9866e-cbf6-47d0-91f1-25412166eb4f","Type":"ContainerStarted","Data":"45d3d4700e4f5895228b27b92ec698993b23582b6f5637ee067acd7ad2c61b5e"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.094562 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.121214 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" podStartSLOduration=125.121190037 podStartE2EDuration="2m5.121190037s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.085212406 +0000 UTC m=+145.851202417" watchObservedRunningTime="2026-02-03 00:17:34.121190037 +0000 UTC m=+145.887180048" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.121964 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" podStartSLOduration=125.121959386 podStartE2EDuration="2m5.121959386s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.121836032 +0000 UTC m=+145.887826043" watchObservedRunningTime="2026-02-03 00:17:34.121959386 +0000 UTC m=+145.887949397" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.125269 4798 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" event={"ID":"b86e799d-e230-4159-9a60-a92b5caee0fa","Type":"ContainerStarted","Data":"b6b5ee38480a9c2f7b7d8ec0300db1cd60decb9dd4afd17b6cc10428d9ceea00"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.130437 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.131544 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.631520132 +0000 UTC m=+146.397510323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.131843 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" event={"ID":"a2481f1c-dd12-4275-be76-110b4ad35541","Type":"ContainerStarted","Data":"c60bf2bd1c28c0bf74acc273a67aca06a016d11bd9b98fd00ad2e9d31944ff01"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.145136 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ckdgn" event={"ID":"66b52bf0-b7fc-4518-a958-64995f5c00b0","Type":"ContainerStarted","Data":"567b27274f04a1f8bb71d0dd580e7e6a639ebc15d230932b583c079f50863695"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.153866 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5wrch" event={"ID":"eb78bcb9-c424-4592-99c4-e1a4d711d81b","Type":"ContainerStarted","Data":"4c8339c4ff175c41d6cb049642c86855f94a9f0b4defd37bf54f05d33d960630"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.153922 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5wrch" event={"ID":"eb78bcb9-c424-4592-99c4-e1a4d711d81b","Type":"ContainerStarted","Data":"42bf90ccf59e1eed7f2b6cdb3e006535704ea0535f97596f033ea5b23c0f043a"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.157263 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" 
event={"ID":"dfea4e8f-4296-4aad-97b3-0d571e9c527c","Type":"ContainerStarted","Data":"f64916c82086da0c5804413b9f7335cef98662ada6e4b10dfecea87c0048e757"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.157293 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" event={"ID":"dfea4e8f-4296-4aad-97b3-0d571e9c527c","Type":"ContainerStarted","Data":"96882ca766257174d22b8e3e21439b80528b24c73709065e469747354f331baf"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.170900 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" event={"ID":"841ea4a0-8ab8-4a72-96a8-40578497e9c4","Type":"ContainerStarted","Data":"9df1ba4da1e8744e3bbed74978fa08f5b673b0c8034b50e465dd7a9f2e8edab3"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.184697 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zgqjc" podStartSLOduration=125.184679298 podStartE2EDuration="2m5.184679298s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.184238188 +0000 UTC m=+145.950228199" watchObservedRunningTime="2026-02-03 00:17:34.184679298 +0000 UTC m=+145.950669309" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.191097 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" event={"ID":"e0dc408c-7d0c-4c18-a500-bcffb273b616","Type":"ContainerStarted","Data":"452b7d40bb4dc2b19514205a9628fbf207b07d47caa696555ef4b77fc9ff009e"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.202874 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" 
event={"ID":"cc175a81-0556-44fd-95b9-1ec1e9af0f69","Type":"ContainerStarted","Data":"806d261298603938f69a5e3c15e332af80ae019df6a4ceb2b3246c51d6170532"} Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.203327 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.203403 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.203434 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.215105 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hf94" podStartSLOduration=125.215083111 podStartE2EDuration="2m5.215083111s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.212402255 +0000 UTC m=+145.978392266" watchObservedRunningTime="2026-02-03 00:17:34.215083111 +0000 UTC m=+145.981073122" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.222620 4798 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-p5f22 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 
10.217.0.43:8443: connect: connection refused" start-of-body= Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.222686 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" podUID="cc175a81-0556-44fd-95b9-1ec1e9af0f69" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.238223 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.244133 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.74410872 +0000 UTC m=+146.510098731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.276780 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.285842 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:34 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:34 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:34 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.285933 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.286986 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t2s9r" podStartSLOduration=125.286970251 podStartE2EDuration="2m5.286970251s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.247520064 +0000 
UTC m=+146.013510065" watchObservedRunningTime="2026-02-03 00:17:34.286970251 +0000 UTC m=+146.052960262" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.289760 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5wrch" podStartSLOduration=7.289751099 podStartE2EDuration="7.289751099s" podCreationTimestamp="2026-02-03 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.28729988 +0000 UTC m=+146.053289891" watchObservedRunningTime="2026-02-03 00:17:34.289751099 +0000 UTC m=+146.055741110" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.339446 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" podStartSLOduration=125.33942662 podStartE2EDuration="2m5.33942662s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.33297241 +0000 UTC m=+146.098962421" watchObservedRunningTime="2026-02-03 00:17:34.33942662 +0000 UTC m=+146.105416631" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.342663 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.342998 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 00:17:34.842987227 +0000 UTC m=+146.608977228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.363186 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv" podStartSLOduration=125.363155347 podStartE2EDuration="2m5.363155347s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.359705861 +0000 UTC m=+146.125695892" watchObservedRunningTime="2026-02-03 00:17:34.363155347 +0000 UTC m=+146.129145358" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.396199 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" podStartSLOduration=125.396181364 podStartE2EDuration="2m5.396181364s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.395895277 +0000 UTC m=+146.161885298" watchObservedRunningTime="2026-02-03 00:17:34.396181364 +0000 UTC m=+146.162171375" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.431808 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nq4gt" 
podStartSLOduration=125.431769126 podStartE2EDuration="2m5.431769126s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.429855228 +0000 UTC m=+146.195845239" watchObservedRunningTime="2026-02-03 00:17:34.431769126 +0000 UTC m=+146.197759137" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.444684 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.445334 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:34.94530794 +0000 UTC m=+146.711297951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.451029 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.451532 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.512943 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" podStartSLOduration=125.512924425 podStartE2EDuration="2m5.512924425s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.512176886 +0000 UTC m=+146.278166897" watchObservedRunningTime="2026-02-03 00:17:34.512924425 +0000 UTC m=+146.278914446" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.513205 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-th92f" podStartSLOduration=125.513198562 podStartE2EDuration="2m5.513198562s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.466035194 +0000 UTC m=+146.232025205" watchObservedRunningTime="2026-02-03 
00:17:34.513198562 +0000 UTC m=+146.279188573" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.546686 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.547192 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.047176633 +0000 UTC m=+146.813166644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.571948 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdl4h" podStartSLOduration=125.571926456 podStartE2EDuration="2m5.571926456s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:34.542998999 +0000 UTC m=+146.308989010" watchObservedRunningTime="2026-02-03 00:17:34.571926456 +0000 UTC m=+146.337916467" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.651396 4798 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.652298 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.152261754 +0000 UTC m=+146.918251765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.753923 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.754444 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.254432424 +0000 UTC m=+147.020422435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.809675 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-r8z8z" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.862939 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:34 crc kubenswrapper[4798]: E0203 00:17:34.863533 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.363513134 +0000 UTC m=+147.129503145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.957735 4798 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mntxm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 03 00:17:34 crc kubenswrapper[4798]: [+]log ok Feb 03 00:17:34 crc kubenswrapper[4798]: [+]poststarthook/generic-apiserver-start-informers ok Feb 03 00:17:34 crc kubenswrapper[4798]: [-]poststarthook/max-in-flight-filter failed: reason withheld Feb 03 00:17:34 crc kubenswrapper[4798]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Feb 03 00:17:34 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.958182 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" podUID="47fb3222-46ee-4e02-9a2d-ecc944492a83" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:34 crc kubenswrapper[4798]: I0203 00:17:34.966577 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:34 crc 
kubenswrapper[4798]: E0203 00:17:34.966920 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.466905024 +0000 UTC m=+147.232895035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.067684 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.067959 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.567913085 +0000 UTC m=+147.333903096 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.068126 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.068559 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.568551611 +0000 UTC m=+147.334541622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.169775 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.169977 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.669940391 +0000 UTC m=+147.435930402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.170117 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.170573 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.670553656 +0000 UTC m=+147.436543667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.206987 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" event={"ID":"f9310c18-e995-4f6b-b49a-c0cd2f574506","Type":"ContainerStarted","Data":"8af75c4ca0757edaac5dc3f1c3fd6af0bb0e744546b25a8c24ab956482fdbc8d"} Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.208901 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" event={"ID":"a5b455ef-fa69-453a-9c72-6e952faae9db","Type":"ContainerStarted","Data":"cdf7deeaba07db5b7b183f8c9e9a7d6a50435bdd19c1094c8da8f17538f31f80"} Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.210873 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" event={"ID":"5ea79d8a-686d-411b-8996-82a2e2a669fe","Type":"ContainerStarted","Data":"dff148458e4f518190750a1e4f7870bc6dab601b80c8c92ea8c00d32f9cd45e7"} Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.212557 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ckdgn" event={"ID":"66b52bf0-b7fc-4518-a958-64995f5c00b0","Type":"ContainerStarted","Data":"a559d62c547693f04cd842c52bfef375cdc9e2f8198842f796fdfc69d57a7fb4"} Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.212994 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ckdgn" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.215321 4798 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99jq8" event={"ID":"dfea4e8f-4296-4aad-97b3-0d571e9c527c","Type":"ContainerStarted","Data":"e1c89a3b970f72622ad18cde02b9f5d0834df76a2159cff0e50beb213144d41c"} Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.218839 4798 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9xz4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.218910 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.222058 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.222113 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.225542 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ptdj" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 
00:17:35.229978 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mntxm" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.234268 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p5f22" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.270918 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:35 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:35 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:35 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.270978 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.271544 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.279726 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.779693078 +0000 UTC m=+147.545683089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.308106 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vrvn" podStartSLOduration=126.308084121 podStartE2EDuration="2m6.308084121s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:35.305832965 +0000 UTC m=+147.071823006" watchObservedRunningTime="2026-02-03 00:17:35.308084121 +0000 UTC m=+147.074074132" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.379205 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.379609 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.879592421 +0000 UTC m=+147.645582432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.425690 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.481328 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.481682 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:35.981667449 +0000 UTC m=+147.747657450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.559468 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lzsmv" podStartSLOduration=126.559447254 podStartE2EDuration="2m6.559447254s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:35.460955136 +0000 UTC m=+147.226945157" watchObservedRunningTime="2026-02-03 00:17:35.559447254 +0000 UTC m=+147.325437265" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.583393 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.583741 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.083730966 +0000 UTC m=+147.849720977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.658140 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ckdgn" podStartSLOduration=8.658121167000001 podStartE2EDuration="8.658121167s" podCreationTimestamp="2026-02-03 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:35.559934856 +0000 UTC m=+147.325924857" watchObservedRunningTime="2026-02-03 00:17:35.658121167 +0000 UTC m=+147.424111178" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.685227 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.685825 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.185807322 +0000 UTC m=+147.951797333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.768925 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8p8fg" podStartSLOduration=126.76890862 podStartE2EDuration="2m6.76890862s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:35.720982314 +0000 UTC m=+147.486972325" watchObservedRunningTime="2026-02-03 00:17:35.76890862 +0000 UTC m=+147.534898631" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.788298 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.788851 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.288830433 +0000 UTC m=+148.054820434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.891199 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.891399 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.391360221 +0000 UTC m=+148.157350232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.891910 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.892010 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.892141 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.892957 4798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.39294787 +0000 UTC m=+148.158937881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.893128 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.898220 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.979561 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.993428 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.993693 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.493631763 +0000 UTC m=+148.259621774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.993807 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.993877 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.994004 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:35 crc kubenswrapper[4798]: E0203 00:17:35.994934 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.494921115 +0000 UTC m=+148.260911116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:35 crc kubenswrapper[4798]: I0203 00:17:35.997784 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.007055 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.036935 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.096253 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.096711 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.596694695 +0000 UTC m=+148.362684706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.132227 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.147249 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.199407 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.199790 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.699772977 +0000 UTC m=+148.465762988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.278516 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" event={"ID":"f9310c18-e995-4f6b-b49a-c0cd2f574506","Type":"ContainerStarted","Data":"81313a6fe58f34374002c7f8d12e55cb6f674e821e06acd1323606d33e5e74cf"} Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.278561 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" 
event={"ID":"f9310c18-e995-4f6b-b49a-c0cd2f574506","Type":"ContainerStarted","Data":"449fb91108826fc0930d19c97e71d7ef334f2884587300c80938b23471e7af56"} Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.283815 4798 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9xz4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.283848 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.288626 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:36 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:36 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:36 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.288708 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.291958 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dz2lb" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.309216 4798 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.309405 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.80937855 +0000 UTC m=+148.575368551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.309547 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.309844 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.809831971 +0000 UTC m=+148.575821982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.410070 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.411531 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:36.911515379 +0000 UTC m=+148.677505390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.517518 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.517905 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.017886863 +0000 UTC m=+148.783876884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.545194 4798 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.584971 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"] Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.585924 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.589740 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.601463 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"] Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.622869 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.623314 4798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.123299382 +0000 UTC m=+148.889289393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: W0203 00:17:36.650835 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-55976e1a2252a081ec261c190ac5376b1a868f4159e088d50ed60ea2fbd59055 WatchSource:0}: Error finding container 55976e1a2252a081ec261c190ac5376b1a868f4159e088d50ed60ea2fbd59055: Status 404 returned error can't find the container with id 55976e1a2252a081ec261c190ac5376b1a868f4159e088d50ed60ea2fbd59055 Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.726350 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.726386 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content\") pod \"certified-operators-j4fw8\" (UID: 
\"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.726415 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.726736 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.226722473 +0000 UTC m=+148.992712554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.726892 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5b8s\" (UniqueName: \"kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.798814 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k8gx"] Feb 03 00:17:36 crc 
kubenswrapper[4798]: I0203 00:17:36.801063 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.804865 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.810956 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k8gx"] Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.828919 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.829172 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.829249 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.829360 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 00:17:37.329312742 +0000 UTC m=+149.095302753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.829432 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.829821 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.830316 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.832122 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.332111402 +0000 UTC m=+149.098101413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.832124 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5b8s\" (UniqueName: \"kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.856032 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5b8s\" (UniqueName: \"kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s\") pod \"certified-operators-j4fw8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.901757 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.934864 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.935141 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.935183 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.935202 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29fj\" (UniqueName: \"kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:36 crc kubenswrapper[4798]: E0203 00:17:36.935310 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-03 00:17:37.435281186 +0000 UTC m=+149.201271197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.976549 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:17:36 crc kubenswrapper[4798]: I0203 00:17:36.977867 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.025776 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.036700 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxmk\" (UniqueName: \"kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.036756 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 
00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.036801 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.036917 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.037019 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29fj\" (UniqueName: \"kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.037056 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.037079 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " 
pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: E0203 00:17:37.037112 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.537095547 +0000 UTC m=+149.303085558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxw86" (UID: "691167fd-4218-4be3-bd41-39486e614ab4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.037145 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.037572 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.064480 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29fj\" (UniqueName: \"kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj\") pod \"community-operators-9k8gx\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") " 
pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.122305 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.138275 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.138522 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.138566 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxmk\" (UniqueName: \"kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.138638 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.139081 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: E0203 00:17:37.139146 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 00:17:37.639132002 +0000 UTC m=+149.405122013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.139342 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.168269 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pxmk\" (UniqueName: \"kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk\") pod \"certified-operators-xs4wp\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.188488 4798 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-bh68n"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.189581 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.196816 4798 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T00:17:36.545233089Z","Handler":null,"Name":""} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.206263 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh68n"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.219392 4798 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.219442 4798 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.238786 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.239381 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.243358 4798 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.243386 4798 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:37 crc kubenswrapper[4798]: W0203 00:17:37.269168 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61992978_85f4_4395_b65a_d5efe47c79d8.slice/crio-9b5577557129f00532b80038b737462a6e3093eeb48235f03942a167a7d12c09 WatchSource:0}: Error finding container 9b5577557129f00532b80038b737462a6e3093eeb48235f03942a167a7d12c09: Status 404 returned error can't find the container with id 9b5577557129f00532b80038b737462a6e3093eeb48235f03942a167a7d12c09 Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.273628 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:37 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:37 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:37 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.273721 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.278381 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxw86\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.290736 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.307186 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" event={"ID":"f9310c18-e995-4f6b-b49a-c0cd2f574506","Type":"ContainerStarted","Data":"0bdb7036f1433914756ddc2a46f74fd4661a077526317bff3752bd6e22f127a9"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.312148 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerStarted","Data":"9b5577557129f00532b80038b737462a6e3093eeb48235f03942a167a7d12c09"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.313754 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"745edbb5b923d7be25b22a01820ebc6fecbd4461f6c570338c4a48aa124de4fd"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.313801 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0306b0ed854198e794c280aa2901982ed7a81b9db2820417f77d3e9c649e37b8"} Feb 03 00:17:37 crc 
kubenswrapper[4798]: I0203 00:17:37.314353 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.317323 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d86da0dd83a5e6b35f0bbc2858fb8369462cb7b005883b5ef0ec6d8cc21b6b5"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.317367 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"55976e1a2252a081ec261c190ac5376b1a868f4159e088d50ed60ea2fbd59055"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.323233 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"be36cd795f92dd9d01b0283520a25f89e41eff4aaee78e13e7d356b140a38682"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.323269 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b412b1cb1edb0641ba31468ab266b67157b184bb596443e2508353ed96dd6b25"} Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.340169 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 
00:17:37.340394 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj27d\" (UniqueName: \"kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.340459 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.340479 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.355300 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rsdzb" podStartSLOduration=10.355281324 podStartE2EDuration="10.355281324s" podCreationTimestamp="2026-02-03 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:37.33452282 +0000 UTC m=+149.100512821" watchObservedRunningTime="2026-02-03 00:17:37.355281324 +0000 UTC m=+149.121271335" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.366779 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.442421 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.442626 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj27d\" (UniqueName: \"kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.442894 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.443817 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 
00:17:37.447377 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.455295 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.485749 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj27d\" (UniqueName: \"kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d\") pod \"community-operators-bh68n\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") " pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.533981 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k8gx"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.558018 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.763567 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.787631 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:17:37 crc kubenswrapper[4798]: I0203 00:17:37.817890 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh68n"] Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.160485 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.168132 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-28wct" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.272666 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:38 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:38 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:38 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.272736 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.331446 4798 generic.go:334] "Generic (PLEG): container finished" podID="3bf63648-6aaf-4a86-82a3-454f08b2973c" 
containerID="530244af6ab31fc40c4671306b698269a2d3bf162c808dfaf2d4278b96f87d26" exitCode=0 Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.331539 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerDied","Data":"530244af6ab31fc40c4671306b698269a2d3bf162c808dfaf2d4278b96f87d26"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.331605 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerStarted","Data":"23960e6591c6b6bf8faa4be79e88e7046df3ca810c6f531420e2923800a45f3d"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.333000 4798 generic.go:334] "Generic (PLEG): container finished" podID="61992978-85f4-4395-b65a-d5efe47c79d8" containerID="643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62" exitCode=0 Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.333043 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerDied","Data":"643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.333337 4798 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.335514 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" event={"ID":"691167fd-4218-4be3-bd41-39486e614ab4","Type":"ContainerStarted","Data":"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.335551 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" 
event={"ID":"691167fd-4218-4be3-bd41-39486e614ab4","Type":"ContainerStarted","Data":"3829c6cf8926f3f13edb1e121be0e242a84b742983f5ec9ef8d3033a0e34ec28"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.335660 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.337231 4798 generic.go:334] "Generic (PLEG): container finished" podID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerID="94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04" exitCode=0 Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.337314 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerDied","Data":"94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.337362 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerStarted","Data":"ab754a55c3fb967e8a64575d6f778a2fdb3403b7d468ae51705aa82e3e00c21f"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.339220 4798 generic.go:334] "Generic (PLEG): container finished" podID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerID="1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89" exitCode=0 Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.339258 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerDied","Data":"1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.339295 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" 
event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerStarted","Data":"e57d6801db1e20a2cae8ec8b07e1010ebc19ce9e51ed157dbc56ac05ae7fd8a4"} Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.425586 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" podStartSLOduration=129.425565651 podStartE2EDuration="2m9.425565651s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:38.422752511 +0000 UTC m=+150.188742522" watchObservedRunningTime="2026-02-03 00:17:38.425565651 +0000 UTC m=+150.191555672" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.919244 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.927549 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jth29" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.980228 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"] Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.981171 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.984532 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 00:17:38 crc kubenswrapper[4798]: I0203 00:17:38.997896 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.074002 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.074482 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2vl\" (UniqueName: \"kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.074516 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.180404 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2vl\" (UniqueName: \"kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl\") pod \"redhat-marketplace-kx75m\" (UID: 
\"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.180469 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.180556 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.181013 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.181330 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.216921 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2vl\" (UniqueName: \"kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl\") pod \"redhat-marketplace-kx75m\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") " 
pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.275941 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 00:17:39 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld Feb 03 00:17:39 crc kubenswrapper[4798]: [+]process-running ok Feb 03 00:17:39 crc kubenswrapper[4798]: healthz check failed Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.275994 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.299704 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.361058 4798 generic.go:334] "Generic (PLEG): container finished" podID="9ba81b6e-6d73-4dc4-90ba-4690b971f882" containerID="84103a591dfafbb75f0325730f44182cf836a40c65b1a67c9216841674590efc" exitCode=0 Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.361696 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" event={"ID":"9ba81b6e-6d73-4dc4-90ba-4690b971f882","Type":"ContainerDied","Data":"84103a591dfafbb75f0325730f44182cf836a40c65b1a67c9216841674590efc"} Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.390264 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9n2p"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.391624 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.427144 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9n2p"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.487349 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.487472 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8r2\" (UniqueName: \"kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.487521 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.589165 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8r2\" (UniqueName: \"kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.589531 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.589664 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.590255 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.590344 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.614579 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8r2\" (UniqueName: \"kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2\") pod \"redhat-marketplace-b9n2p\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.622814 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.622870 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mqqj9" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.624646 4798 patch_prober.go:28] interesting pod/console-f9d7485db-mqqj9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.624732 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mqqj9" podUID="dcaaff4a-ac3a-48dd-ab95-5ccf62abda5f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.711316 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.787883 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.789018 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.790909 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.798301 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.819148 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"] Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.867439 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.867491 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.867775 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.867828 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection 
refused" Feb 03 00:17:39 crc kubenswrapper[4798]: W0203 00:17:39.892633 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode128d005_b5e0_4da0_b122_3f69a1751d1f.slice/crio-d69179891d7daf30e66f00ddaff76d07681ae7681d087e7d34c19d9995c99321 WatchSource:0}: Error finding container d69179891d7daf30e66f00ddaff76d07681ae7681d087e7d34c19d9995c99321: Status 404 returned error can't find the container with id d69179891d7daf30e66f00ddaff76d07681ae7681d087e7d34c19d9995c99321 Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.899923 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.899963 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzj6f\" (UniqueName: \"kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.899998 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:39 crc kubenswrapper[4798]: I0203 00:17:39.989730 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"] Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 
00:17:39.992931 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:39.998929 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"] Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.001274 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.001303 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzj6f\" (UniqueName: \"kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.001342 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.003265 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.003568 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.040939 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzj6f\" (UniqueName: \"kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f\") pod \"redhat-operators-lp6cb\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " pod="openshift-marketplace/redhat-operators-lp6cb"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.106420 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.114117 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.114558 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn479\" (UniqueName: \"kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.137118 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lp6cb"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.173949 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9n2p"]
Feb 03 00:17:40 crc kubenswrapper[4798]: W0203 00:17:40.188502 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f52ff0d_ea7f_4dba_80e3_98c88d78adbf.slice/crio-7bf5660ff22b7fc12f00f4cf039322c2d1ef7dc99ffd4dc57f488ed700c5555d WatchSource:0}: Error finding container 7bf5660ff22b7fc12f00f4cf039322c2d1ef7dc99ffd4dc57f488ed700c5555d: Status 404 returned error can't find the container with id 7bf5660ff22b7fc12f00f4cf039322c2d1ef7dc99ffd4dc57f488ed700c5555d
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.215435 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn479\" (UniqueName: \"kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.215522 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.215545 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.216040 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.216588 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.265526 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn479\" (UniqueName: \"kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479\") pod \"redhat-operators-k99wq\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.268023 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4c8fd"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.275852 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 00:17:40 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld
Feb 03 00:17:40 crc kubenswrapper[4798]: [+]process-running ok
Feb 03 00:17:40 crc kubenswrapper[4798]: healthz check failed
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.276174 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.350174 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k99wq"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.361149 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.361868 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.374147 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.374326 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.374513 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.407470 4798 generic.go:334] "Generic (PLEG): container finished" podID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerID="9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0" exitCode=0
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.407581 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerDied","Data":"9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0"}
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.407621 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerStarted","Data":"d69179891d7daf30e66f00ddaff76d07681ae7681d087e7d34c19d9995c99321"}
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.418623 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.418795 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.418960 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerStarted","Data":"7bf5660ff22b7fc12f00f4cf039322c2d1ef7dc99ffd4dc57f488ed700c5555d"}
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.520291 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.520571 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.520678 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.543019 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.621611 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"]
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.649524 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.760127 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:40 crc kubenswrapper[4798]: I0203 00:17:40.972002 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"]
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.031257 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.107718 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.132951 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume\") pod \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") "
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.133024 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume\") pod \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") "
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.133079 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8snh8\" (UniqueName: \"kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8\") pod \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\" (UID: \"9ba81b6e-6d73-4dc4-90ba-4690b971f882\") "
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.133597 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ba81b6e-6d73-4dc4-90ba-4690b971f882" (UID: "9ba81b6e-6d73-4dc4-90ba-4690b971f882"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.137969 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ba81b6e-6d73-4dc4-90ba-4690b971f882" (UID: "9ba81b6e-6d73-4dc4-90ba-4690b971f882"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.138149 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8" (OuterVolumeSpecName: "kube-api-access-8snh8") pod "9ba81b6e-6d73-4dc4-90ba-4690b971f882" (UID: "9ba81b6e-6d73-4dc4-90ba-4690b971f882"). InnerVolumeSpecName "kube-api-access-8snh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:17:41 crc kubenswrapper[4798]: W0203 00:17:41.171371 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd8ef6321_c907_4111_861b_6bb603ed0342.slice/crio-41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e WatchSource:0}: Error finding container 41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e: Status 404 returned error can't find the container with id 41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.243069 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8snh8\" (UniqueName: \"kubernetes.io/projected/9ba81b6e-6d73-4dc4-90ba-4690b971f882-kube-api-access-8snh8\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.243431 4798 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba81b6e-6d73-4dc4-90ba-4690b971f882-config-volume\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.243445 4798 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ba81b6e-6d73-4dc4-90ba-4690b971f882-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.274882 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 00:17:41 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld
Feb 03 00:17:41 crc kubenswrapper[4798]: [+]process-running ok
Feb 03 00:17:41 crc kubenswrapper[4798]: healthz check failed
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.274937 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.441629 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d8ef6321-c907-4111-861b-6bb603ed0342","Type":"ContainerStarted","Data":"41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.448495 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerStarted","Data":"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.448538 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerStarted","Data":"6a21a1b212afdd4ae2bec2d93013e98425e1b24e4505e110ad93f1e155c29052"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.461292 4798 generic.go:334] "Generic (PLEG): container finished" podID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerID="3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc" exitCode=0
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.461361 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerDied","Data":"3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.464346 4798 generic.go:334] "Generic (PLEG): container finished" podID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerID="de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213" exitCode=0
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.464391 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerDied","Data":"de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.464410 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerStarted","Data":"5cb1d0b4b08968d80788267df7c97ca1ca7495d4654febec0fcf91eadacc569f"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.480208 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w" event={"ID":"9ba81b6e-6d73-4dc4-90ba-4690b971f882","Type":"ContainerDied","Data":"689b0982bfd0220fa8f0868ffb9d0b5a5d11fa9ebbaf2ac27f8070b4c62dd67b"}
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.480241 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b0982bfd0220fa8f0868ffb9d0b5a5d11fa9ebbaf2ac27f8070b4c62dd67b"
Feb 03 00:17:41 crc kubenswrapper[4798]: I0203 00:17:41.480344 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501295-v5q6w"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.271548 4798 patch_prober.go:28] interesting pod/router-default-5444994796-4c8fd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 00:17:42 crc kubenswrapper[4798]: [-]has-synced failed: reason withheld
Feb 03 00:17:42 crc kubenswrapper[4798]: [+]process-running ok
Feb 03 00:17:42 crc kubenswrapper[4798]: healthz check failed
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.271613 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4c8fd" podUID="a963b867-b963-4b9e-abe9-47088ff98bea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.467926 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 03 00:17:42 crc kubenswrapper[4798]: E0203 00:17:42.468498 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba81b6e-6d73-4dc4-90ba-4690b971f882" containerName="collect-profiles"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.468514 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba81b6e-6d73-4dc4-90ba-4690b971f882" containerName="collect-profiles"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.468673 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba81b6e-6d73-4dc4-90ba-4690b971f882" containerName="collect-profiles"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.469127 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.475732 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.476304 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.476483 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.511570 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d8ef6321-c907-4111-861b-6bb603ed0342","Type":"ContainerStarted","Data":"4c89b933c8037cdbcba20d357958a01d293fd9fc5650fe38222bbe04e1b25637"}
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.513167 4798 generic.go:334] "Generic (PLEG): container finished" podID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerID="84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979" exitCode=0
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.513208 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerDied","Data":"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979"}
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.527135 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.5271191650000002 podStartE2EDuration="2.527119165s" podCreationTimestamp="2026-02-03 00:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:42.52451446 +0000 UTC m=+154.290504471" watchObservedRunningTime="2026-02-03 00:17:42.527119165 +0000 UTC m=+154.293109176"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.566793 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.566857 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.668822 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.668918 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.669067 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.685277 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:42 crc kubenswrapper[4798]: I0203 00:17:42.790671 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.034172 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 03 00:17:43 crc kubenswrapper[4798]: W0203 00:17:43.071011 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2455bee_e1d1_4ced_95fc_f8deb60db02b.slice/crio-a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa WatchSource:0}: Error finding container a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa: Status 404 returned error can't find the container with id a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.285439 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4c8fd"
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.288730 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4c8fd"
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.527935 4798 generic.go:334] "Generic (PLEG): container finished" podID="d8ef6321-c907-4111-861b-6bb603ed0342" containerID="4c89b933c8037cdbcba20d357958a01d293fd9fc5650fe38222bbe04e1b25637" exitCode=0
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.528135 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d8ef6321-c907-4111-861b-6bb603ed0342","Type":"ContainerDied","Data":"4c89b933c8037cdbcba20d357958a01d293fd9fc5650fe38222bbe04e1b25637"}
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.531210 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e2455bee-e1d1-4ced-95fc-f8deb60db02b","Type":"ContainerStarted","Data":"a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa"}
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.867842 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 00:17:43 crc kubenswrapper[4798]: I0203 00:17:43.867889 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 00:17:44 crc kubenswrapper[4798]: I0203 00:17:44.547903 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e2455bee-e1d1-4ced-95fc-f8deb60db02b","Type":"ContainerStarted","Data":"800fe24b83abc23f3fb134de39ae78c410545195bfb62679fd70811f408d37ae"}
Feb 03 00:17:44 crc kubenswrapper[4798]: I0203 00:17:44.921273 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:44 crc kubenswrapper[4798]: I0203 00:17:44.937982 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.93796522 podStartE2EDuration="2.93796522s" podCreationTimestamp="2026-02-03 00:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:17:44.572271587 +0000 UTC m=+156.338261608" watchObservedRunningTime="2026-02-03 00:17:44.93796522 +0000 UTC m=+156.703955231"
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.019857 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir\") pod \"d8ef6321-c907-4111-861b-6bb603ed0342\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") "
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.020019 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access\") pod \"d8ef6321-c907-4111-861b-6bb603ed0342\" (UID: \"d8ef6321-c907-4111-861b-6bb603ed0342\") "
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.019998 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8ef6321-c907-4111-861b-6bb603ed0342" (UID: "d8ef6321-c907-4111-861b-6bb603ed0342"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.020601 4798 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8ef6321-c907-4111-861b-6bb603ed0342-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.057775 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8ef6321-c907-4111-861b-6bb603ed0342" (UID: "d8ef6321-c907-4111-861b-6bb603ed0342"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.122125 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8ef6321-c907-4111-861b-6bb603ed0342-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.448519 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ckdgn"
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.557619 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d8ef6321-c907-4111-861b-6bb603ed0342","Type":"ContainerDied","Data":"41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e"}
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.557678 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41775c646e086d4a261afa59b7ebda5498b8ecbf0d62d76e4071e01e3562615e"
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.557620 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.560398 4798 generic.go:334] "Generic (PLEG): container finished" podID="e2455bee-e1d1-4ced-95fc-f8deb60db02b" containerID="800fe24b83abc23f3fb134de39ae78c410545195bfb62679fd70811f408d37ae" exitCode=0
Feb 03 00:17:45 crc kubenswrapper[4798]: I0203 00:17:45.560440 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e2455bee-e1d1-4ced-95fc-f8deb60db02b","Type":"ContainerDied","Data":"800fe24b83abc23f3fb134de39ae78c410545195bfb62679fd70811f408d37ae"}
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.627380 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mqqj9"
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.636410 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mqqj9"
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.868231 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.868615 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.868240 4798 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8pqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 03 00:17:49 crc kubenswrapper[4798]: I0203 00:17:49.868720 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8pqq" podUID="e80fb785-eead-4ebd-9e0c-0f5c548c257c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 03 00:17:51 crc kubenswrapper[4798]: I0203 00:17:51.625348 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:17:51 crc kubenswrapper[4798]: I0203 00:17:51.630590 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/039e204d-4d36-471e-990f-4eb5b4a193fc-metrics-certs\") pod \"network-metrics-daemon-hzk9m\" (UID: \"039e204d-4d36-471e-990f-4eb5b4a193fc\") " pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:17:51 crc kubenswrapper[4798]: I0203 00:17:51.925238 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hzk9m"
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.119988 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.263901 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access\") pod \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") "
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.263965 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir\") pod \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\" (UID: \"e2455bee-e1d1-4ced-95fc-f8deb60db02b\") "
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.264248 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2455bee-e1d1-4ced-95fc-f8deb60db02b" (UID: "e2455bee-e1d1-4ced-95fc-f8deb60db02b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.264328 4798 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.268441 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2455bee-e1d1-4ced-95fc-f8deb60db02b" (UID: "e2455bee-e1d1-4ced-95fc-f8deb60db02b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.367106 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2455bee-e1d1-4ced-95fc-f8deb60db02b-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.617088 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e2455bee-e1d1-4ced-95fc-f8deb60db02b","Type":"ContainerDied","Data":"a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa"}
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.617131 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e8ed5fceb276875cf17e09a8b174fdac7f0d0e36b9263c9655c888f9a745aa"
Feb 03 00:17:54 crc kubenswrapper[4798]: I0203 00:17:54.617191 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 00:17:57 crc kubenswrapper[4798]: I0203 00:17:57.461125 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86"
Feb 03 00:17:59 crc kubenswrapper[4798]: I0203 00:17:59.874439 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k8pqq"
Feb 03 00:18:06 crc kubenswrapper[4798]: I0203 00:18:06.152930 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 00:18:10 crc kubenswrapper[4798]: I0203 00:18:10.432814 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9v9rv"
Feb 03 00:18:12 crc kubenswrapper[4798]: E0203 00:18:12.852440 4798 log.go:32] "PullImage from image service failed" err="rpc
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 00:18:12 crc kubenswrapper[4798]: E0203 00:18:12.852937 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m29fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9k8gx_openshift-marketplace(3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:18:12 crc kubenswrapper[4798]: E0203 00:18:12.854204 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9k8gx" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" Feb 03 00:18:13 crc kubenswrapper[4798]: I0203 00:18:13.867078 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:18:13 crc kubenswrapper[4798]: I0203 00:18:13.867155 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:18:17 crc kubenswrapper[4798]: E0203 00:18:17.677363 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 00:18:17 crc kubenswrapper[4798]: E0203 00:18:17.677949 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tj27d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bh68n_openshift-marketplace(3bf63648-6aaf-4a86-82a3-454f08b2973c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:18:17 crc kubenswrapper[4798]: E0203 00:18:17.679841 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bh68n" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" Feb 03 00:18:18 crc 
kubenswrapper[4798]: E0203 00:18:18.020229 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9k8gx" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" Feb 03 00:18:18 crc kubenswrapper[4798]: E0203 00:18:18.020276 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bh68n" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" Feb 03 00:18:18 crc kubenswrapper[4798]: E0203 00:18:18.138774 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 00:18:18 crc kubenswrapper[4798]: E0203 00:18:18.138963 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dp8r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b9n2p_openshift-marketplace(0f52ff0d-ea7f-4dba-80e3-98c88d78adbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:18:18 crc kubenswrapper[4798]: E0203 00:18:18.140274 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b9n2p" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" Feb 03 00:18:18 crc 
kubenswrapper[4798]: I0203 00:18:18.749143 4798 generic.go:334] "Generic (PLEG): container finished" podID="ef7f31fe-b09b-4388-a076-1a287c202292" containerID="23be4d8cbb0c6bd61a93f82865274d81bcf0e6073cc39acc7dc8afa2be409b03" exitCode=0 Feb 03 00:18:18 crc kubenswrapper[4798]: I0203 00:18:18.749446 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29501280-nhgw5" event={"ID":"ef7f31fe-b09b-4388-a076-1a287c202292","Type":"ContainerDied","Data":"23be4d8cbb0c6bd61a93f82865274d81bcf0e6073cc39acc7dc8afa2be409b03"} Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.266921 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 00:18:20 crc kubenswrapper[4798]: E0203 00:18:20.267442 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ef6321-c907-4111-861b-6bb603ed0342" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.267459 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ef6321-c907-4111-861b-6bb603ed0342" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: E0203 00:18:20.267473 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2455bee-e1d1-4ced-95fc-f8deb60db02b" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.267481 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2455bee-e1d1-4ced-95fc-f8deb60db02b" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.267594 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2455bee-e1d1-4ced-95fc-f8deb60db02b" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.267603 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ef6321-c907-4111-861b-6bb603ed0342" containerName="pruner" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.270082 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.276055 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.276231 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.282552 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.351855 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.351910 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.454313 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.454422 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.455287 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.479965 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:20 crc kubenswrapper[4798]: I0203 00:18:20.598241 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 00:18:21 crc kubenswrapper[4798]: E0203 00:18:21.509124 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 00:18:21 crc kubenswrapper[4798]: E0203 00:18:21.509567 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn2vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kx75m_openshift-marketplace(e128d005-b5e0-4da0-b122-3f69a1751d1f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:18:21 crc kubenswrapper[4798]: E0203 00:18:21.512966 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kx75m" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" Feb 03 00:18:21 crc kubenswrapper[4798]: E0203 00:18:21.987690 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kx75m" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" Feb 03 00:18:21 crc kubenswrapper[4798]: E0203 00:18:21.987628 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b9n2p" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" Feb 03 00:18:24 crc kubenswrapper[4798]: E0203 00:18:24.517666 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 00:18:24 crc kubenswrapper[4798]: E0203 00:18:24.517853 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5b8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j4fw8_openshift-marketplace(61992978-85f4-4395-b65a-d5efe47c79d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:18:24 crc kubenswrapper[4798]: E0203 00:18:24.518966 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j4fw8" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.860592 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.862191 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.879339 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.940415 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.940552 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:25 crc kubenswrapper[4798]: I0203 00:18:25.940751 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.042097 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.042194 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.042284 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.042426 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.042518 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.067102 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access\") 
pod \"installer-9-crc\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 00:18:26 crc kubenswrapper[4798]: E0203 00:18:26.167576 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 03 00:18:26 crc kubenswrapper[4798]: E0203 00:18:26.167778 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pxmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xs4wp_openshift-marketplace(3d6b722e-bce7-4023-b30e-f9a460adb50c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 00:18:26 crc kubenswrapper[4798]: E0203 00:18:26.169050 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xs4wp" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c"
Feb 03 00:18:26 crc kubenswrapper[4798]: I0203 00:18:26.196570 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.188630 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j4fw8" podUID="61992978-85f4-4395-b65a-d5efe47c79d8"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.189157 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xs4wp" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.214026 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.214611 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cn479,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-k99wq_openshift-marketplace(9a1ac91d-6785-452c-8361-cc26d6ff6235): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.217812 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-k99wq" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235"
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.280211 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29501280-nhgw5"
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.378493 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca\") pod \"ef7f31fe-b09b-4388-a076-1a287c202292\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") "
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.378575 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd8vc\" (UniqueName: \"kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc\") pod \"ef7f31fe-b09b-4388-a076-1a287c202292\" (UID: \"ef7f31fe-b09b-4388-a076-1a287c202292\") "
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.379507 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca" (OuterVolumeSpecName: "serviceca") pod "ef7f31fe-b09b-4388-a076-1a287c202292" (UID: "ef7f31fe-b09b-4388-a076-1a287c202292"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.379718 4798 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef7f31fe-b09b-4388-a076-1a287c202292-serviceca\") on node \"crc\" DevicePath \"\""
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.385845 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc" (OuterVolumeSpecName: "kube-api-access-fd8vc") pod "ef7f31fe-b09b-4388-a076-1a287c202292" (UID: "ef7f31fe-b09b-4388-a076-1a287c202292"). InnerVolumeSpecName "kube-api-access-fd8vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.480797 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd8vc\" (UniqueName: \"kubernetes.io/projected/ef7f31fe-b09b-4388-a076-1a287c202292-kube-api-access-fd8vc\") on node \"crc\" DevicePath \"\""
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.548849 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 00:18:28 crc kubenswrapper[4798]: W0203 00:18:28.556933 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c0acb5a_7675_442f_b5ec_6d308a25c027.slice/crio-309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b WatchSource:0}: Error finding container 309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b: Status 404 returned error can't find the container with id 309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.672478 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.675601 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hzk9m"]
Feb 03 00:18:28 crc kubenswrapper[4798]: W0203 00:18:28.678794 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod039e204d_4d36_471e_990f_4eb5b4a193fc.slice/crio-ffd4981a70c80af15b78688de956580e820fbbfb8d05c0a0b8b5053c5449ddbf WatchSource:0}: Error finding container ffd4981a70c80af15b78688de956580e820fbbfb8d05c0a0b8b5053c5449ddbf: Status 404 returned error can't find the container with id ffd4981a70c80af15b78688de956580e820fbbfb8d05c0a0b8b5053c5449ddbf
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.752921 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.753101 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzj6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lp6cb_openshift-marketplace(1fafb100-14c8-437f-b5ac-4264b4cbef55): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.754301 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lp6cb" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55"
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.807082 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29501280-nhgw5"
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.807199 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29501280-nhgw5" event={"ID":"ef7f31fe-b09b-4388-a076-1a287c202292","Type":"ContainerDied","Data":"a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619"}
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.807246 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f6aea604c50a0037d3180ba7baf54dbbe9d31330a0088bef59c433e2b2b619"
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.809199 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"06b08551-eba7-488a-9123-3ae00e7a3d77","Type":"ContainerStarted","Data":"f27296d3e757e9428d7742e409f3674a0499dff26022437a1b62adcaf4092af3"}
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.810537 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" event={"ID":"039e204d-4d36-471e-990f-4eb5b4a193fc","Type":"ContainerStarted","Data":"ffd4981a70c80af15b78688de956580e820fbbfb8d05c0a0b8b5053c5449ddbf"}
Feb 03 00:18:28 crc kubenswrapper[4798]: I0203 00:18:28.811942 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c0acb5a-7675-442f-b5ec-6d308a25c027","Type":"ContainerStarted","Data":"309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b"}
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.813742 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lp6cb" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55"
Feb 03 00:18:28 crc kubenswrapper[4798]: E0203 00:18:28.813812 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-k99wq" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235"
Feb 03 00:18:30 crc kubenswrapper[4798]: I0203 00:18:30.828739 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"06b08551-eba7-488a-9123-3ae00e7a3d77","Type":"ContainerStarted","Data":"02ed672201cf03fa7b5a4b230f058beaa3b8efa68b95d75495104a73e4c5c497"}
Feb 03 00:18:31 crc kubenswrapper[4798]: I0203 00:18:31.840716 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c0acb5a-7675-442f-b5ec-6d308a25c027","Type":"ContainerStarted","Data":"baede55ea7ebf24f2519968fb6696a61da27d03c9defef4a28e6bcd65575bfff"}
Feb 03 00:18:31 crc kubenswrapper[4798]: I0203 00:18:31.842734 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" event={"ID":"039e204d-4d36-471e-990f-4eb5b4a193fc","Type":"ContainerStarted","Data":"244b12ac3a8e8ea8bbae9d277c345165c24d7382dcaba0fa6b40f5ae13f5e3e2"}
Feb 03 00:18:31 crc kubenswrapper[4798]: I0203 00:18:31.859145 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.859128504 podStartE2EDuration="6.859128504s" podCreationTimestamp="2026-02-03 00:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:18:31.855183484 +0000 UTC m=+203.621173495" watchObservedRunningTime="2026-02-03 00:18:31.859128504 +0000 UTC m=+203.625118515"
Feb 03 00:18:33 crc kubenswrapper[4798]: I0203 00:18:33.855893 4798 generic.go:334] "Generic (PLEG): container finished" podID="1c0acb5a-7675-442f-b5ec-6d308a25c027" containerID="baede55ea7ebf24f2519968fb6696a61da27d03c9defef4a28e6bcd65575bfff" exitCode=0
Feb 03 00:18:33 crc kubenswrapper[4798]: I0203 00:18:33.856455 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c0acb5a-7675-442f-b5ec-6d308a25c027","Type":"ContainerDied","Data":"baede55ea7ebf24f2519968fb6696a61da27d03c9defef4a28e6bcd65575bfff"}
Feb 03 00:18:33 crc kubenswrapper[4798]: I0203 00:18:33.859222 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hzk9m" event={"ID":"039e204d-4d36-471e-990f-4eb5b4a193fc","Type":"ContainerStarted","Data":"34bb448905d843893dc3aa78292dd408cbc95eb2d2557267a032fcdf5a135527"}
Feb 03 00:18:34 crc kubenswrapper[4798]: I0203 00:18:34.880923 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hzk9m" podStartSLOduration=185.880906541 podStartE2EDuration="3m5.880906541s" podCreationTimestamp="2026-02-03 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:18:34.878219827 +0000 UTC m=+206.644209838" watchObservedRunningTime="2026-02-03 00:18:34.880906541 +0000 UTC m=+206.646896542"
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.223741 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.289012 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access\") pod \"1c0acb5a-7675-442f-b5ec-6d308a25c027\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") "
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.289468 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir\") pod \"1c0acb5a-7675-442f-b5ec-6d308a25c027\" (UID: \"1c0acb5a-7675-442f-b5ec-6d308a25c027\") "
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.289786 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c0acb5a-7675-442f-b5ec-6d308a25c027" (UID: "1c0acb5a-7675-442f-b5ec-6d308a25c027"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.297830 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c0acb5a-7675-442f-b5ec-6d308a25c027" (UID: "1c0acb5a-7675-442f-b5ec-6d308a25c027"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.391357 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c0acb5a-7675-442f-b5ec-6d308a25c027-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.391390 4798 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0acb5a-7675-442f-b5ec-6d308a25c027-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.877475 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c0acb5a-7675-442f-b5ec-6d308a25c027","Type":"ContainerDied","Data":"309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b"}
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.877743 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="309dfbd8afd7e6e543dd318d6b753c84eb14bd90ad425c91c0d4df4fd1b89f1b"
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.877496 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.880117 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerStarted","Data":"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b"}
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.886701 4798 generic.go:334] "Generic (PLEG): container finished" podID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerID="8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3" exitCode=0
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.886740 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerDied","Data":"8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3"}
Feb 03 00:18:36 crc kubenswrapper[4798]: I0203 00:18:36.890034 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerStarted","Data":"4c5511ad821533d6f417ad281f42d2b26e491f211bbdb07bd49e195e8d29fc76"}
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.897509 4798 generic.go:334] "Generic (PLEG): container finished" podID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerID="4c5511ad821533d6f417ad281f42d2b26e491f211bbdb07bd49e195e8d29fc76" exitCode=0
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.898032 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerDied","Data":"4c5511ad821533d6f417ad281f42d2b26e491f211bbdb07bd49e195e8d29fc76"}
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.903509 4798 generic.go:334] "Generic (PLEG): container finished" podID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerID="10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b" exitCode=0
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.903583 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerDied","Data":"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b"}
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.908899 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerStarted","Data":"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"}
Feb 03 00:18:37 crc kubenswrapper[4798]: I0203 00:18:37.943252 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k8gx" podStartSLOduration=2.72393688 podStartE2EDuration="1m1.943231884s" podCreationTimestamp="2026-02-03 00:17:36 +0000 UTC" firstStartedPulling="2026-02-03 00:17:38.340603118 +0000 UTC m=+150.106593129" lastFinishedPulling="2026-02-03 00:18:37.559898132 +0000 UTC m=+209.325888133" observedRunningTime="2026-02-03 00:18:37.94276708 +0000 UTC m=+209.708757101" watchObservedRunningTime="2026-02-03 00:18:37.943231884 +0000 UTC m=+209.709221905"
Feb 03 00:18:38 crc kubenswrapper[4798]: I0203 00:18:38.920475 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerStarted","Data":"0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be"}
Feb 03 00:18:38 crc kubenswrapper[4798]: I0203 00:18:38.924096 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerStarted","Data":"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9"}
Feb 03 00:18:38 crc kubenswrapper[4798]: I0203 00:18:38.926585 4798 generic.go:334] "Generic (PLEG): container finished" podID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerID="52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440" exitCode=0
Feb 03 00:18:38 crc kubenswrapper[4798]: I0203 00:18:38.926621 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerDied","Data":"52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440"}
Feb 03 00:18:38 crc kubenswrapper[4798]: I0203 00:18:38.968400 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kx75m" podStartSLOduration=3.007331415 podStartE2EDuration="1m0.968380818s" podCreationTimestamp="2026-02-03 00:17:38 +0000 UTC" firstStartedPulling="2026-02-03 00:17:40.408685808 +0000 UTC m=+152.174675819" lastFinishedPulling="2026-02-03 00:18:38.369735211 +0000 UTC m=+210.135725222" observedRunningTime="2026-02-03 00:18:38.967120123 +0000 UTC m=+210.733110134" watchObservedRunningTime="2026-02-03 00:18:38.968380818 +0000 UTC m=+210.734370829"
Feb 03 00:18:39 crc kubenswrapper[4798]: I0203 00:18:39.300614 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kx75m"
Feb 03 00:18:39 crc kubenswrapper[4798]: I0203 00:18:39.300988 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kx75m"
Feb 03 00:18:39 crc kubenswrapper[4798]: I0203 00:18:39.935130 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerStarted","Data":"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a"}
Feb 03 00:18:39 crc kubenswrapper[4798]: I0203 00:18:39.956818 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bh68n" podStartSLOduration=2.863236819 podStartE2EDuration="1m2.956799826s" podCreationTimestamp="2026-02-03 00:17:37 +0000 UTC" firstStartedPulling="2026-02-03 00:17:38.333115232 +0000 UTC m=+150.099105243" lastFinishedPulling="2026-02-03 00:18:38.426678249 +0000 UTC m=+210.192668250" observedRunningTime="2026-02-03 00:18:38.991134229 +0000 UTC m=+210.757124240" watchObservedRunningTime="2026-02-03 00:18:39.956799826 +0000 UTC m=+211.722789837"
Feb 03 00:18:39 crc kubenswrapper[4798]: I0203 00:18:39.958206 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9n2p" podStartSLOduration=2.8804820920000003 podStartE2EDuration="1m0.958200244s" podCreationTimestamp="2026-02-03 00:17:39 +0000 UTC" firstStartedPulling="2026-02-03 00:17:41.464970649 +0000 UTC m=+153.230960660" lastFinishedPulling="2026-02-03 00:18:39.542688801 +0000 UTC m=+211.308678812" observedRunningTime="2026-02-03 00:18:39.957887175 +0000 UTC m=+211.723877176" watchObservedRunningTime="2026-02-03 00:18:39.958200244 +0000 UTC m=+211.724190255"
Feb 03 00:18:40 crc kubenswrapper[4798]: I0203 00:18:40.570996 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-kx75m" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="registry-server" probeResult="failure" output=<
Feb 03 00:18:40 crc kubenswrapper[4798]: timeout: failed to connect service ":50051" within 1s
Feb 03 00:18:40 crc kubenswrapper[4798]: >
Feb 03 00:18:40 crc kubenswrapper[4798]: I0203 00:18:40.941136 4798 generic.go:334] "Generic (PLEG): container finished" podID="61992978-85f4-4395-b65a-d5efe47c79d8" containerID="945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867" exitCode=0
Feb 03 00:18:40 crc kubenswrapper[4798]: I0203 00:18:40.941186 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerDied","Data":"945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867"}
Feb 03 00:18:42 crc kubenswrapper[4798]: I0203 00:18:42.955143 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerStarted","Data":"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"}
Feb 03 00:18:43 crc kubenswrapper[4798]: I0203 00:18:43.866794 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 00:18:43 crc kubenswrapper[4798]: I0203 00:18:43.867055 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 00:18:43 crc kubenswrapper[4798]: I0203 00:18:43.867153 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j"
Feb 03 00:18:43 crc kubenswrapper[4798]: I0203 00:18:43.867889 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 00:18:43 crc kubenswrapper[4798]: I0203 00:18:43.868048 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4" gracePeriod=600
Feb 03 00:18:44 crc kubenswrapper[4798]: I0203 00:18:44.992320 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4fw8" podStartSLOduration=5.8860542559999995 podStartE2EDuration="1m8.992297219s" podCreationTimestamp="2026-02-03 00:17:36 +0000 UTC" firstStartedPulling="2026-02-03 00:17:38.333845891 +0000 UTC m=+150.099835892" lastFinishedPulling="2026-02-03 00:18:41.440088844 +0000 UTC m=+213.206078855" observedRunningTime="2026-02-03 00:18:44.988157604 +0000 UTC m=+216.754147625" watchObservedRunningTime="2026-02-03 00:18:44.992297219 +0000 UTC m=+216.758287250"
Feb 03 00:18:46 crc kubenswrapper[4798]: I0203 00:18:46.902627 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4fw8"
Feb 03 00:18:46 crc kubenswrapper[4798]: I0203 00:18:46.902905 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4fw8"
Feb 03 00:18:46 crc kubenswrapper[4798]: I0203 00:18:46.979575 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4" exitCode=0
Feb 03 00:18:46 crc kubenswrapper[4798]: I0203 00:18:46.979854 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4"}
Feb 03 00:18:46 crc kubenswrapper[4798]: I0203 00:18:46.997715 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4fw8"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.122697 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k8gx"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.122744 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k8gx"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.160692 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k8gx"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.559606 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bh68n"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.560138 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bh68n"
Feb 03 00:18:47 crc kubenswrapper[4798]: I0203 00:18:47.609024 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bh68n"
Feb 03 00:18:48 crc kubenswrapper[4798]: I0203 00:18:48.025400 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k8gx"
Feb 03 00:18:48 crc kubenswrapper[4798]: I0203 00:18:48.032336 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bh68n"
Feb 03 00:18:48 crc kubenswrapper[4798]: I0203 00:18:48.995180 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9"}
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.356062 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kx75m"
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.411813 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kx75m"
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.508502 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh68n"]
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.711605 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9n2p"
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.711682 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9n2p"
Feb 03 00:18:49 crc kubenswrapper[4798]: I0203 00:18:49.766238 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9n2p"
Feb 03 00:18:50 crc kubenswrapper[4798]: I0203 00:18:50.005510 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bh68n" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="registry-server" containerID="cri-o://0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be" gracePeriod=2
Feb 03 00:18:50 crc kubenswrapper[4798]: I0203 00:18:50.046429 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9n2p"
Feb 03 00:18:50 crc kubenswrapper[4798]: E0203 00:18:50.969677 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf63648_6aaf_4a86_82a3_454f08b2973c.slice/crio-conmon-0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be.scope\": RecentStats: unable to find data in memory cache]"
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.011802 4798 generic.go:334] "Generic (PLEG): container finished" podID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerID="0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be" exitCode=0
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.011874 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerDied","Data":"0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be"}
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.626743 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh68n"
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.716133 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities\") pod \"3bf63648-6aaf-4a86-82a3-454f08b2973c\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") "
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.716189 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content\") pod \"3bf63648-6aaf-4a86-82a3-454f08b2973c\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") "
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.716238 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj27d\" (UniqueName: \"kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d\") pod \"3bf63648-6aaf-4a86-82a3-454f08b2973c\" (UID: \"3bf63648-6aaf-4a86-82a3-454f08b2973c\") "
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.717356 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities" (OuterVolumeSpecName: "utilities") pod "3bf63648-6aaf-4a86-82a3-454f08b2973c" (UID: "3bf63648-6aaf-4a86-82a3-454f08b2973c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.722824 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d" (OuterVolumeSpecName: "kube-api-access-tj27d") pod "3bf63648-6aaf-4a86-82a3-454f08b2973c" (UID: "3bf63648-6aaf-4a86-82a3-454f08b2973c"). InnerVolumeSpecName "kube-api-access-tj27d".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.799482 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf63648-6aaf-4a86-82a3-454f08b2973c" (UID: "3bf63648-6aaf-4a86-82a3-454f08b2973c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.818389 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.818425 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf63648-6aaf-4a86-82a3-454f08b2973c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.818441 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj27d\" (UniqueName: \"kubernetes.io/projected/3bf63648-6aaf-4a86-82a3-454f08b2973c-kube-api-access-tj27d\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:51 crc kubenswrapper[4798]: I0203 00:18:51.916919 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9n2p"] Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.020021 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh68n" event={"ID":"3bf63648-6aaf-4a86-82a3-454f08b2973c","Type":"ContainerDied","Data":"23960e6591c6b6bf8faa4be79e88e7046df3ca810c6f531420e2923800a45f3d"} Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.020045 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bh68n" Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.020070 4798 scope.go:117] "RemoveContainer" containerID="0fe1d690f102525ced96e91e82bf4c4358347518b2b51d9e487d570108d704be" Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.023004 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerStarted","Data":"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"} Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.025068 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerStarted","Data":"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b"} Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.028000 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9n2p" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="registry-server" containerID="cri-o://21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a" gracePeriod=2 Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.028425 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerStarted","Data":"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d"} Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.039815 4798 scope.go:117] "RemoveContainer" containerID="4c5511ad821533d6f417ad281f42d2b26e491f211bbdb07bd49e195e8d29fc76" Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.085598 4798 scope.go:117] "RemoveContainer" containerID="530244af6ab31fc40c4671306b698269a2d3bf162c808dfaf2d4278b96f87d26" Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 
00:18:52.090884 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh68n"] Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.094450 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bh68n"] Feb 03 00:18:52 crc kubenswrapper[4798]: I0203 00:18:52.918884 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" path="/var/lib/kubelet/pods/3bf63648-6aaf-4a86-82a3-454f08b2973c/volumes" Feb 03 00:18:53 crc kubenswrapper[4798]: I0203 00:18:53.034851 4798 generic.go:334] "Generic (PLEG): container finished" podID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerID="794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d" exitCode=0 Feb 03 00:18:53 crc kubenswrapper[4798]: I0203 00:18:53.034936 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerDied","Data":"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d"} Feb 03 00:18:53 crc kubenswrapper[4798]: I0203 00:18:53.041025 4798 generic.go:334] "Generic (PLEG): container finished" podID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerID="4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d" exitCode=0 Feb 03 00:18:53 crc kubenswrapper[4798]: I0203 00:18:53.041101 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerDied","Data":"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"} Feb 03 00:18:53 crc kubenswrapper[4798]: I0203 00:18:53.051780 4798 generic.go:334] "Generic (PLEG): container finished" podID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerID="d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b" exitCode=0 Feb 03 00:18:53 crc kubenswrapper[4798]: 
I0203 00:18:53.051841 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerDied","Data":"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.009222 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.046317 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content\") pod \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.046404 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp8r2\" (UniqueName: \"kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2\") pod \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.046512 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities\") pod \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\" (UID: \"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf\") " Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.047639 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities" (OuterVolumeSpecName: "utilities") pod "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" (UID: "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.058946 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2" (OuterVolumeSpecName: "kube-api-access-dp8r2") pod "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" (UID: "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf"). InnerVolumeSpecName "kube-api-access-dp8r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.065624 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerStarted","Data":"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.070869 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerStarted","Data":"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.073091 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" (UID: "0f52ff0d-ea7f-4dba-80e3-98c88d78adbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.083625 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerStarted","Data":"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.086928 4798 generic.go:334] "Generic (PLEG): container finished" podID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerID="21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a" exitCode=0 Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.086970 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerDied","Data":"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.086995 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9n2p" event={"ID":"0f52ff0d-ea7f-4dba-80e3-98c88d78adbf","Type":"ContainerDied","Data":"7bf5660ff22b7fc12f00f4cf039322c2d1ef7dc99ffd4dc57f488ed700c5555d"} Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.087019 4798 scope.go:117] "RemoveContainer" containerID="21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.087175 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9n2p" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.091240 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k99wq" podStartSLOduration=3.950180161 podStartE2EDuration="1m15.091220462s" podCreationTimestamp="2026-02-03 00:17:39 +0000 UTC" firstStartedPulling="2026-02-03 00:17:42.515355663 +0000 UTC m=+154.281345674" lastFinishedPulling="2026-02-03 00:18:53.656395954 +0000 UTC m=+225.422385975" observedRunningTime="2026-02-03 00:18:54.09007292 +0000 UTC m=+225.856062941" watchObservedRunningTime="2026-02-03 00:18:54.091220462 +0000 UTC m=+225.857210473" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.111640 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xs4wp" podStartSLOduration=2.729431994 podStartE2EDuration="1m18.111627117s" podCreationTimestamp="2026-02-03 00:17:36 +0000 UTC" firstStartedPulling="2026-02-03 00:17:38.339673225 +0000 UTC m=+150.105663236" lastFinishedPulling="2026-02-03 00:18:53.721868348 +0000 UTC m=+225.487858359" observedRunningTime="2026-02-03 00:18:54.106044843 +0000 UTC m=+225.872034854" watchObservedRunningTime="2026-02-03 00:18:54.111627117 +0000 UTC m=+225.877617128" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.116960 4798 scope.go:117] "RemoveContainer" containerID="52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.128502 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9n2p"] Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.131248 4798 scope.go:117] "RemoveContainer" containerID="3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.137340 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-b9n2p"] Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.147815 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.147848 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.147860 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp8r2\" (UniqueName: \"kubernetes.io/projected/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf-kube-api-access-dp8r2\") on node \"crc\" DevicePath \"\"" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.148728 4798 scope.go:117] "RemoveContainer" containerID="21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a" Feb 03 00:18:54 crc kubenswrapper[4798]: E0203 00:18:54.149311 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a\": container with ID starting with 21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a not found: ID does not exist" containerID="21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.149344 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a"} err="failed to get container status \"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a\": rpc error: code = NotFound desc = could not find container \"21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a\": container with ID 
starting with 21e396a33089b616bd94f22ec3a50c5603ac7347bc67a3cd3b729b9d6bc8ea9a not found: ID does not exist" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.149369 4798 scope.go:117] "RemoveContainer" containerID="52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440" Feb 03 00:18:54 crc kubenswrapper[4798]: E0203 00:18:54.149613 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440\": container with ID starting with 52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440 not found: ID does not exist" containerID="52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.149635 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440"} err="failed to get container status \"52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440\": rpc error: code = NotFound desc = could not find container \"52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440\": container with ID starting with 52f259938a7f0a06724805baafdbfdb2bb31efb27f3f76a68a341ce03312e440 not found: ID does not exist" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.149690 4798 scope.go:117] "RemoveContainer" containerID="3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc" Feb 03 00:18:54 crc kubenswrapper[4798]: E0203 00:18:54.149911 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc\": container with ID starting with 3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc not found: ID does not exist" containerID="3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc" Feb 03 
00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.149934 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc"} err="failed to get container status \"3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc\": rpc error: code = NotFound desc = could not find container \"3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc\": container with ID starting with 3d3a6e9073c5adf2b5279cd1766eac54967c7a17ace647b0e4b2ef315257a5bc not found: ID does not exist" Feb 03 00:18:54 crc kubenswrapper[4798]: I0203 00:18:54.914992 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" path="/var/lib/kubelet/pods/0f52ff0d-ea7f-4dba-80e3-98c88d78adbf/volumes" Feb 03 00:18:55 crc kubenswrapper[4798]: I0203 00:18:55.111948 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lp6cb" podStartSLOduration=3.719032901 podStartE2EDuration="1m16.111927774s" podCreationTimestamp="2026-02-03 00:17:39 +0000 UTC" firstStartedPulling="2026-02-03 00:17:41.46702278 +0000 UTC m=+153.233012791" lastFinishedPulling="2026-02-03 00:18:53.859917653 +0000 UTC m=+225.625907664" observedRunningTime="2026-02-03 00:18:55.1089145 +0000 UTC m=+226.874904511" watchObservedRunningTime="2026-02-03 00:18:55.111927774 +0000 UTC m=+226.877917785" Feb 03 00:18:56 crc kubenswrapper[4798]: I0203 00:18:56.947194 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:18:57 crc kubenswrapper[4798]: I0203 00:18:57.291487 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:18:57 crc kubenswrapper[4798]: I0203 00:18:57.291532 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:18:57 crc kubenswrapper[4798]: I0203 00:18:57.327127 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:18:58 crc kubenswrapper[4798]: I0203 00:18:58.175646 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:18:58 crc kubenswrapper[4798]: I0203 00:18:58.803089 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7m7t5"] Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.148848 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.149578 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.185920 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.350918 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.351167 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:00 crc kubenswrapper[4798]: I0203 00:19:00.394686 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:01 crc kubenswrapper[4798]: I0203 00:19:01.193232 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:19:01 crc kubenswrapper[4798]: I0203 00:19:01.197751 4798 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.311408 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.311921 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xs4wp" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="registry-server" containerID="cri-o://dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8" gracePeriod=2 Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.666433 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.787107 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities\") pod \"3d6b722e-bce7-4023-b30e-f9a460adb50c\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.787302 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content\") pod \"3d6b722e-bce7-4023-b30e-f9a460adb50c\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.787475 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pxmk\" (UniqueName: \"kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk\") pod \"3d6b722e-bce7-4023-b30e-f9a460adb50c\" (UID: \"3d6b722e-bce7-4023-b30e-f9a460adb50c\") " Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.788812 4798 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities" (OuterVolumeSpecName: "utilities") pod "3d6b722e-bce7-4023-b30e-f9a460adb50c" (UID: "3d6b722e-bce7-4023-b30e-f9a460adb50c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.797405 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk" (OuterVolumeSpecName: "kube-api-access-4pxmk") pod "3d6b722e-bce7-4023-b30e-f9a460adb50c" (UID: "3d6b722e-bce7-4023-b30e-f9a460adb50c"). InnerVolumeSpecName "kube-api-access-4pxmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.839798 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d6b722e-bce7-4023-b30e-f9a460adb50c" (UID: "3d6b722e-bce7-4023-b30e-f9a460adb50c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.888601 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pxmk\" (UniqueName: \"kubernetes.io/projected/3d6b722e-bce7-4023-b30e-f9a460adb50c-kube-api-access-4pxmk\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.888633 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.888644 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6b722e-bce7-4023-b30e-f9a460adb50c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:02 crc kubenswrapper[4798]: I0203 00:19:02.915556 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"] Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.170439 4798 generic.go:334] "Generic (PLEG): container finished" podID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerID="dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8" exitCode=0 Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.170510 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerDied","Data":"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8"} Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.170555 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xs4wp" event={"ID":"3d6b722e-bce7-4023-b30e-f9a460adb50c","Type":"ContainerDied","Data":"ab754a55c3fb967e8a64575d6f778a2fdb3403b7d468ae51705aa82e3e00c21f"} Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.170575 
4798 scope.go:117] "RemoveContainer" containerID="dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.170857 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k99wq" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="registry-server" containerID="cri-o://7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54" gracePeriod=2 Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.171269 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xs4wp" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.189264 4798 scope.go:117] "RemoveContainer" containerID="794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.191948 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.194859 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xs4wp"] Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.214791 4798 scope.go:117] "RemoveContainer" containerID="94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.228530 4798 scope.go:117] "RemoveContainer" containerID="dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8" Feb 03 00:19:03 crc kubenswrapper[4798]: E0203 00:19:03.229205 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8\": container with ID starting with dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8 not found: ID does not exist" 
containerID="dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.229288 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8"} err="failed to get container status \"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8\": rpc error: code = NotFound desc = could not find container \"dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8\": container with ID starting with dd3beef16af736f9973bc9a0abe24c497ec91672d4f2c4c2704a1f1816c29de8 not found: ID does not exist" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.229343 4798 scope.go:117] "RemoveContainer" containerID="794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d" Feb 03 00:19:03 crc kubenswrapper[4798]: E0203 00:19:03.229810 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d\": container with ID starting with 794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d not found: ID does not exist" containerID="794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.229859 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d"} err="failed to get container status \"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d\": rpc error: code = NotFound desc = could not find container \"794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d\": container with ID starting with 794c3e3a05c56584074837e7720bba60d768abd3576997efce6959d5e41c1e4d not found: ID does not exist" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.229886 4798 scope.go:117] 
"RemoveContainer" containerID="94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04" Feb 03 00:19:03 crc kubenswrapper[4798]: E0203 00:19:03.230132 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04\": container with ID starting with 94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04 not found: ID does not exist" containerID="94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.230280 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04"} err="failed to get container status \"94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04\": rpc error: code = NotFound desc = could not find container \"94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04\": container with ID starting with 94582cbb2f9c7d306973d75508d40bc9ae77f0de27329fbe9a264bc525c8fa04 not found: ID does not exist" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.531141 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.596419 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities\") pod \"9a1ac91d-6785-452c-8361-cc26d6ff6235\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.596522 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn479\" (UniqueName: \"kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479\") pod \"9a1ac91d-6785-452c-8361-cc26d6ff6235\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.596615 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content\") pod \"9a1ac91d-6785-452c-8361-cc26d6ff6235\" (UID: \"9a1ac91d-6785-452c-8361-cc26d6ff6235\") " Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.597390 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities" (OuterVolumeSpecName: "utilities") pod "9a1ac91d-6785-452c-8361-cc26d6ff6235" (UID: "9a1ac91d-6785-452c-8361-cc26d6ff6235"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.606284 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479" (OuterVolumeSpecName: "kube-api-access-cn479") pod "9a1ac91d-6785-452c-8361-cc26d6ff6235" (UID: "9a1ac91d-6785-452c-8361-cc26d6ff6235"). InnerVolumeSpecName "kube-api-access-cn479". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.698884 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn479\" (UniqueName: \"kubernetes.io/projected/9a1ac91d-6785-452c-8361-cc26d6ff6235-kube-api-access-cn479\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.699004 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.747567 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a1ac91d-6785-452c-8361-cc26d6ff6235" (UID: "9a1ac91d-6785-452c-8361-cc26d6ff6235"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:19:03 crc kubenswrapper[4798]: I0203 00:19:03.800124 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ac91d-6785-452c-8361-cc26d6ff6235-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.178550 4798 generic.go:334] "Generic (PLEG): container finished" podID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerID="7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54" exitCode=0 Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.178709 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerDied","Data":"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54"} Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.178887 4798 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-k99wq" event={"ID":"9a1ac91d-6785-452c-8361-cc26d6ff6235","Type":"ContainerDied","Data":"6a21a1b212afdd4ae2bec2d93013e98425e1b24e4505e110ad93f1e155c29052"} Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.178822 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k99wq" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.178911 4798 scope.go:117] "RemoveContainer" containerID="7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.200077 4798 scope.go:117] "RemoveContainer" containerID="d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.211128 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"] Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.213491 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k99wq"] Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.226964 4798 scope.go:117] "RemoveContainer" containerID="84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.239267 4798 scope.go:117] "RemoveContainer" containerID="7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54" Feb 03 00:19:04 crc kubenswrapper[4798]: E0203 00:19:04.239764 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54\": container with ID starting with 7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54 not found: ID does not exist" containerID="7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.239805 4798 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54"} err="failed to get container status \"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54\": rpc error: code = NotFound desc = could not find container \"7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54\": container with ID starting with 7146e2c9b5147c751bb816bd8f70939d53897a49007168b3051dcd6806540a54 not found: ID does not exist" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.239833 4798 scope.go:117] "RemoveContainer" containerID="d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b" Feb 03 00:19:04 crc kubenswrapper[4798]: E0203 00:19:04.240146 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b\": container with ID starting with d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b not found: ID does not exist" containerID="d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.240176 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b"} err="failed to get container status \"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b\": rpc error: code = NotFound desc = could not find container \"d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b\": container with ID starting with d659723d67263baf1fe7f334fa9cebc754d5326bcd2b0f38f393254a806c6d4b not found: ID does not exist" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.240778 4798 scope.go:117] "RemoveContainer" containerID="84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979" Feb 03 00:19:04 crc kubenswrapper[4798]: E0203 
00:19:04.241052 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979\": container with ID starting with 84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979 not found: ID does not exist" containerID="84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.241076 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979"} err="failed to get container status \"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979\": rpc error: code = NotFound desc = could not find container \"84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979\": container with ID starting with 84e71c3c16ae16ddc7797e327d7f5927025ba35a96a7cf31b391a996c123b979 not found: ID does not exist" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.915390 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" path="/var/lib/kubelet/pods/3d6b722e-bce7-4023-b30e-f9a460adb50c/volumes" Feb 03 00:19:04 crc kubenswrapper[4798]: I0203 00:19:04.916206 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" path="/var/lib/kubelet/pods/9a1ac91d-6785-452c-8361-cc26d6ff6235/volumes" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.421387 4798 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.421936 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.421950 4798 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.421965 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.421972 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.421980 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.421987 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.421997 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7f31fe-b09b-4388-a076-1a287c202292" containerName="image-pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422004 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7f31fe-b09b-4388-a076-1a287c202292" containerName="image-pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422012 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0acb5a-7675-442f-b5ec-6d308a25c027" containerName="pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422017 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0acb5a-7675-442f-b5ec-6d308a25c027" containerName="pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422027 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422033 4798 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422042 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422048 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422054 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422060 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422069 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422074 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422081 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422087 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="extract-utilities" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422097 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422103 4798 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422111 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422117 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="extract-content" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422123 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422129 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.422155 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422161 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422237 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f52ff0d-ea7f-4dba-80e3-98c88d78adbf" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422249 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6b722e-bce7-4023-b30e-f9a460adb50c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422256 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7f31fe-b09b-4388-a076-1a287c202292" containerName="image-pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422265 4798 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3bf63648-6aaf-4a86-82a3-454f08b2973c" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422280 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0acb5a-7675-442f-b5ec-6d308a25c027" containerName="pruner" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422289 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1ac91d-6785-452c-8361-cc26d6ff6235" containerName="registry-server" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422564 4798 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.422720 4798 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424302 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424299 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5" gracePeriod=15 Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424516 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185" gracePeriod=15 Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424770 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b" gracePeriod=15 Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424558 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b" gracePeriod=15 Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427655 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427702 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427726 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427733 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427771 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427777 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427791 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 
00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427797 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427811 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427817 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 00:19:08 crc kubenswrapper[4798]: E0203 00:19:08.427830 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.427836 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.428761 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.428777 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.428793 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.428804 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.428817 4798 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.424294 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7" gracePeriod=15 Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.653617 4798 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.683941 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753227 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753274 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753301 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753329 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753519 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753592 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753633 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.753688 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855168 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855209 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855228 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855258 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855275 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855293 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855320 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855326 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855351 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855355 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855387 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855392 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855308 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855435 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855455 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.855502 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.910849 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:08 crc kubenswrapper[4798]: I0203 00:19:08.982135 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 00:19:09 crc kubenswrapper[4798]: W0203 00:19:09.001076 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f9fc2025b11ba755fabb0037d450006ef5e37bc582541973725c5373069cb15c WatchSource:0}: Error finding container f9fc2025b11ba755fabb0037d450006ef5e37bc582541973725c5373069cb15c: Status 404 returned error can't find the container with id f9fc2025b11ba755fabb0037d450006ef5e37bc582541973725c5373069cb15c Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.004190 4798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189094842f1c9504 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 00:19:09.003601156 +0000 UTC m=+240.769591167,LastTimestamp:2026-02-03 00:19:09.003601156 +0000 UTC m=+240.769591167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.030206 4798 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.030380 4798 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.030533 4798 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.030696 4798 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.030844 4798 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:09 crc kubenswrapper[4798]: I0203 00:19:09.030866 4798 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.031108 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 03 00:19:09 crc kubenswrapper[4798]: I0203 00:19:09.205042 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 00:19:09 crc kubenswrapper[4798]: I0203 00:19:09.205842 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5" exitCode=0 Feb 03 00:19:09 crc kubenswrapper[4798]: I0203 00:19:09.205875 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b" exitCode=2 Feb 03 00:19:09 crc kubenswrapper[4798]: I0203 00:19:09.206981 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f9fc2025b11ba755fabb0037d450006ef5e37bc582541973725c5373069cb15c"} Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.232020 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Feb 03 00:19:09 crc kubenswrapper[4798]: E0203 00:19:09.632975 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.213082 4798 generic.go:334] "Generic (PLEG): container finished" podID="06b08551-eba7-488a-9123-3ae00e7a3d77" containerID="02ed672201cf03fa7b5a4b230f058beaa3b8efa68b95d75495104a73e4c5c497" exitCode=0 Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.213134 4798 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"06b08551-eba7-488a-9123-3ae00e7a3d77","Type":"ContainerDied","Data":"02ed672201cf03fa7b5a4b230f058beaa3b8efa68b95d75495104a73e4c5c497"} Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.213701 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.213868 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.216424 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.217597 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185" exitCode=0 Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.217631 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b" exitCode=0 Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.219160 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4"} Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.219770 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:10 crc kubenswrapper[4798]: I0203 00:19:10.220187 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:10 crc kubenswrapper[4798]: E0203 00:19:10.433828 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.226618 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.227687 4798 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7" exitCode=0 Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.422715 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.424057 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.424397 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.514212 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.515107 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.515619 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.516101 4798 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.516528 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.586690 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir\") pod \"06b08551-eba7-488a-9123-3ae00e7a3d77\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.586755 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access\") pod \"06b08551-eba7-488a-9123-3ae00e7a3d77\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " Feb 03 00:19:11 crc 
kubenswrapper[4798]: I0203 00:19:11.586801 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "06b08551-eba7-488a-9123-3ae00e7a3d77" (UID: "06b08551-eba7-488a-9123-3ae00e7a3d77"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.586855 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock\") pod \"06b08551-eba7-488a-9123-3ae00e7a3d77\" (UID: \"06b08551-eba7-488a-9123-3ae00e7a3d77\") " Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.586982 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock" (OuterVolumeSpecName: "var-lock") pod "06b08551-eba7-488a-9123-3ae00e7a3d77" (UID: "06b08551-eba7-488a-9123-3ae00e7a3d77"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.587046 4798 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.587058 4798 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06b08551-eba7-488a-9123-3ae00e7a3d77-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.592054 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "06b08551-eba7-488a-9123-3ae00e7a3d77" (UID: "06b08551-eba7-488a-9123-3ae00e7a3d77"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687670 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687713 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687821 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687819 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687889 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.687932 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.688076 4798 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.688094 4798 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.688106 4798 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:11 crc kubenswrapper[4798]: I0203 00:19:11.688118 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06b08551-eba7-488a-9123-3ae00e7a3d77-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:12 crc kubenswrapper[4798]: E0203 00:19:12.034946 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.236268 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.237181 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"06b08551-eba7-488a-9123-3ae00e7a3d77","Type":"ContainerDied","Data":"f27296d3e757e9428d7742e409f3674a0499dff26022437a1b62adcaf4092af3"} Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.237243 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27296d3e757e9428d7742e409f3674a0499dff26022437a1b62adcaf4092af3" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.241204 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.241758 4798 scope.go:117] "RemoveContainer" containerID="b848412f3f729b0be1246c7aa6d61d8a64e93fdbdaa5899e447dd24adb39f185" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.241902 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.253253 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.253875 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.254152 4798 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.262394 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.262593 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.262850 4798 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.264201 4798 scope.go:117] "RemoveContainer" containerID="033b416e7549939712a710931ff8eeed3b360c4a52f48ca3e003d48be6a9dde5" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.280785 4798 scope.go:117] "RemoveContainer" containerID="a58ca61528f9b3bf28c062c16d1b924e68f3a7bc60438abbb9e6d726023b713b" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.298396 4798 scope.go:117] "RemoveContainer" containerID="d75e6cbd2a5c8ecdfeb505388b2c00877ab436a5434a9ffb2c15fd1a6cbde36b" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.318335 4798 scope.go:117] "RemoveContainer" containerID="3a7e69941f257b4b15956eceee179fe4480d582215ae076f2757f5ead357c5a7" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.337386 4798 scope.go:117] "RemoveContainer" containerID="5f76fe95c90d6d7268f51580430426afc9716ab970f2015b55f89a1498eef598" Feb 03 00:19:12 crc kubenswrapper[4798]: I0203 00:19:12.914860 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 00:19:15 crc kubenswrapper[4798]: E0203 00:19:15.235863 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" 
interval="6.4s" Feb 03 00:19:15 crc kubenswrapper[4798]: E0203 00:19:15.529075 4798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189094842f1c9504 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 00:19:09.003601156 +0000 UTC m=+240.769591167,LastTimestamp:2026-02-03 00:19:09.003601156 +0000 UTC m=+240.769591167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 00:19:18 crc kubenswrapper[4798]: I0203 00:19:18.913790 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:18 crc kubenswrapper[4798]: I0203 00:19:18.914260 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: 
connect: connection refused" Feb 03 00:19:21 crc kubenswrapper[4798]: E0203 00:19:21.637737 4798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="7s" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.907549 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.908585 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.909084 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.918866 4798 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.918902 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:21 crc kubenswrapper[4798]: E0203 00:19:21.919317 4798 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": 
dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:21 crc kubenswrapper[4798]: I0203 00:19:21.919811 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:21 crc kubenswrapper[4798]: W0203 00:19:21.937755 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d8c2e04936477bcf569f19e0781418a892b8dc00a74d25f113a32e90e6289b3d WatchSource:0}: Error finding container d8c2e04936477bcf569f19e0781418a892b8dc00a74d25f113a32e90e6289b3d: Status 404 returned error can't find the container with id d8c2e04936477bcf569f19e0781418a892b8dc00a74d25f113a32e90e6289b3d Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.301722 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d82989fdb5ab7276b55542f92fe65a76b8eec585351153784de1f759fc90c437"} Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.301775 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d8c2e04936477bcf569f19e0781418a892b8dc00a74d25f113a32e90e6289b3d"} Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.304797 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.304856 4798 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53" exitCode=1 Feb 03 00:19:22 crc 
kubenswrapper[4798]: I0203 00:19:22.304899 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53"} Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.305597 4798 scope.go:117] "RemoveContainer" containerID="1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53" Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.305773 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.306164 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:22 crc kubenswrapper[4798]: I0203 00:19:22.306585 4798 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.312288 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 00:19:23 crc 
kubenswrapper[4798]: I0203 00:19:23.312368 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"49516a4e59f4000be756a5fed37645d099cd1780922423495014a40579a4307e"} Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.313208 4798 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.313622 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.313966 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.314564 4798 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d82989fdb5ab7276b55542f92fe65a76b8eec585351153784de1f759fc90c437" exitCode=0 Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.314626 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d82989fdb5ab7276b55542f92fe65a76b8eec585351153784de1f759fc90c437"} Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.314934 4798 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.314980 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.315012 4798 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.315269 4798 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: E0203 00:19:23.315497 4798 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.315535 4798 status_manager.go:851] "Failed to get status for pod" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 03 00:19:23 crc kubenswrapper[4798]: I0203 00:19:23.832827 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerName="oauth-openshift" containerID="cri-o://fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68" gracePeriod=15 Feb 03 00:19:24 crc kubenswrapper[4798]: I0203 00:19:24.325429 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8de7d788ae3a0808f0d361cd9938102968f8b3c75d29e13b52fb94ff93cfb907"} Feb 03 00:19:24 crc kubenswrapper[4798]: I0203 00:19:24.936206 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:19:24 crc kubenswrapper[4798]: I0203 00:19:24.960730 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:24 crc kubenswrapper[4798]: I0203 00:19:24.962056 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062359 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062718 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062744 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062773 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062802 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062822 4798 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hbpdl\" (UniqueName: \"kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062842 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062865 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062893 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.062913 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.063112 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.063147 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.063167 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig\") pod \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\" (UID: \"0c4c1155-ba10-4dd4-95a8-105c5c7168eb\") " Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.063326 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.063626 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.064039 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.064604 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.064720 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.069384 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.069999 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl" (OuterVolumeSpecName: "kube-api-access-hbpdl") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "kube-api-access-hbpdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.071922 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.075775 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.076200 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.079778 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.080015 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.080398 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.080720 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0c4c1155-ba10-4dd4-95a8-105c5c7168eb" (UID: "0c4c1155-ba10-4dd4-95a8-105c5c7168eb"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164307 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164353 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164372 4798 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164386 4798 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164399 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164415 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbpdl\" (UniqueName: \"kubernetes.io/projected/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-kube-api-access-hbpdl\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164427 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164442 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164455 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164467 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164479 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164491 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.164503 4798 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c4c1155-ba10-4dd4-95a8-105c5c7168eb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 00:19:25 crc 
kubenswrapper[4798]: I0203 00:19:25.332006 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b30f1ef65285d49186c6553c4bfdc1cf5f4e6ace1b8e1c483bd8e54fc949246c"} Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.333272 4798 generic.go:334] "Generic (PLEG): container finished" podID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerID="fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68" exitCode=0 Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.333314 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.333316 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" event={"ID":"0c4c1155-ba10-4dd4-95a8-105c5c7168eb","Type":"ContainerDied","Data":"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68"} Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.333369 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7m7t5" event={"ID":"0c4c1155-ba10-4dd4-95a8-105c5c7168eb","Type":"ContainerDied","Data":"0d7760dad2cc4819973300069c3b491210a28d4e66cc69426c3453167e5618c5"} Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.333392 4798 scope.go:117] "RemoveContainer" containerID="fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.354833 4798 scope.go:117] "RemoveContainer" containerID="fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68" Feb 03 00:19:25 crc kubenswrapper[4798]: E0203 00:19:25.355236 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68\": container with ID starting with fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68 not found: ID does not exist" containerID="fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68" Feb 03 00:19:25 crc kubenswrapper[4798]: I0203 00:19:25.355277 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68"} err="failed to get container status \"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68\": rpc error: code = NotFound desc = could not find container \"fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68\": container with ID starting with fd2b98d45d9a65c60fc8d6bb9182964da7c7936c6a066ffda51c12db83b8ca68 not found: ID does not exist" Feb 03 00:19:26 crc kubenswrapper[4798]: I0203 00:19:26.352147 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b6d89f8a06af504e95b6c3006054f58acea13530486720e8cf389fcebf2cd933"} Feb 03 00:19:27 crc kubenswrapper[4798]: I0203 00:19:27.360318 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc71bbc6a6ca55906987bf987882bdc9184bfb7c316d792636804ce23eaf818d"} Feb 03 00:19:28 crc kubenswrapper[4798]: I0203 00:19:28.370699 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9aed7eb360af8d7a7df9f0c31b29da6ea25e3b4b3094cc2130ec573a643db2b5"} Feb 03 00:19:28 crc kubenswrapper[4798]: I0203 00:19:28.371045 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 03 00:19:28 crc kubenswrapper[4798]: I0203 00:19:28.371112 4798 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:28 crc kubenswrapper[4798]: I0203 00:19:28.371128 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:28 crc kubenswrapper[4798]: I0203 00:19:28.378088 4798 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:29 crc kubenswrapper[4798]: I0203 00:19:29.194157 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 00:19:29 crc kubenswrapper[4798]: I0203 00:19:29.194494 4798 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 03 00:19:29 crc kubenswrapper[4798]: I0203 00:19:29.194589 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 03 00:19:29 crc kubenswrapper[4798]: I0203 00:19:29.375508 4798 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:29 crc kubenswrapper[4798]: I0203 00:19:29.375536 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 
00:19:30 crc kubenswrapper[4798]: I0203 00:19:30.365371 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 00:19:31 crc kubenswrapper[4798]: I0203 00:19:31.080008 4798 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="12a0b192-119c-4154-93b3-2db1d1ff1469" Feb 03 00:19:39 crc kubenswrapper[4798]: I0203 00:19:39.194895 4798 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 03 00:19:39 crc kubenswrapper[4798]: I0203 00:19:39.195482 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 03 00:19:40 crc kubenswrapper[4798]: I0203 00:19:40.309230 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 00:19:40 crc kubenswrapper[4798]: I0203 00:19:40.647647 4798 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.102713 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.147366 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 00:19:41 crc kubenswrapper[4798]: 
I0203 00:19:41.194627 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.274153 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.625999 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.687190 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.853224 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 03 00:19:41 crc kubenswrapper[4798]: I0203 00:19:41.989734 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.188120 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.257276 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.291213 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.364985 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.675228 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.760307 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.829762 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.893091 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.940591 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.946183 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 03 00:19:42 crc kubenswrapper[4798]: I0203 00:19:42.983153 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 03 00:19:43 crc kubenswrapper[4798]: I0203 00:19:43.066842 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 03 00:19:43 crc kubenswrapper[4798]: I0203 00:19:43.110002 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 03 00:19:43 crc kubenswrapper[4798]: I0203 00:19:43.317466 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 03 00:19:43 crc kubenswrapper[4798]: I0203 00:19:43.883696 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.022904 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.052150 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.112899 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.215590 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.311757 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.386187 4798 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.397388 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.401462 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.410745 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.449244 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.461162 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.505424 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.539764 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.587039 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.587112 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.673619 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.721337 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.820511 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.855855 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.903896 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.941333 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.958930 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.965078 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.988328 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.989997 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 03 00:19:44 crc kubenswrapper[4798]: I0203 00:19:44.992877 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.008274 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.080275 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.087159 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.098418 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.205743 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.329193 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.356191 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.404286 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.465901 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.485080 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.514396 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.691414 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.710794 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.731980 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.827663 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.884022 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.903996 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 03 00:19:45 crc kubenswrapper[4798]: I0203 00:19:45.993176 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.068497 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.131617 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.311687 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.319955 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.321698 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.325075 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.330419 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.331030 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.390301 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.648415 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.685298 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.693009 4798 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.766148 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.774999 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.836683 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.896881 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.909708 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.933353 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 03 00:19:46 crc kubenswrapper[4798]: I0203 00:19:46.993617 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.032426 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.059543 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.084932 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.086338 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.232011 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.247644 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.284182 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.311514 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.316868 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.319733 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.354924 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.437239 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.509870 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.533542 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.590017 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.632821 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.633214 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.839821 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 03 00:19:47 crc kubenswrapper[4798]: I0203 00:19:47.921879 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.025896 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.053050 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.072831 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.143840 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.230817 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.323386 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.480001 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.530602 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.550587 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.610110 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.697435 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 00:19:48 crc kubenswrapper[4798]: I0203 00:19:48.744431 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.140324 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.175077 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.194445 4798 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.194496 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.194553 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.195158 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"49516a4e59f4000be756a5fed37645d099cd1780922423495014a40579a4307e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.195256 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://49516a4e59f4000be756a5fed37645d099cd1780922423495014a40579a4307e" gracePeriod=30
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.278454 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.284897 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.367384 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.532561 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.691771 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 03 00:19:49 crc kubenswrapper[4798]: I0203 00:19:49.947609 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.001819 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.035693 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.075631 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.117765 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.145035 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.256187 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.277136 4798 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.412341 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.426834 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.433305 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.450694 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.463533 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.501005 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.536431 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.571356 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.599227 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.599235 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.603155 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.670739 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.691937 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.709606 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.724462 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.798269 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.859739 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.863509 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 03 00:19:50 crc kubenswrapper[4798]: I0203 00:19:50.975790 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.080304 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.159017 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.273346 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.284973 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.295358 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.307981 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.432079 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.551219 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.580479 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.580743 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.587892 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.597642 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.606860 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.765126 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.901583 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 03 00:19:51 crc kubenswrapper[4798]: I0203 00:19:51.972761 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.008619 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.010217 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.198076 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.242291 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.261100 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.305522 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.355321 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.361643 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.645528 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.675552 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.679405 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.732520 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.768794 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.789506 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.794967 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.846139 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.916162 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.917529 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.995779 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 03 00:19:52 crc kubenswrapper[4798]: I0203 00:19:52.997748 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.019043 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.173952 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.323786 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.385387 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.435355 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.436608 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.476589 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.493973 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.495121 4798 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.603165 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.622920 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.681220 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.708363 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.748060 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.874148 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 03 00:19:53 crc kubenswrapper[4798]: I0203 00:19:53.917685 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.172026 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.256374 4798 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.257904 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.259532 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.2595122 podStartE2EDuration="46.2595122s" podCreationTimestamp="2026-02-03 00:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:19:31.056421451 +0000 UTC m=+262.822411462" watchObservedRunningTime="2026-02-03 00:19:54.2595122 +0000 UTC m=+286.025502211"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.260778 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-7m7t5"]
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.260827 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-dd79f44-59gpg","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 03 00:19:54 crc kubenswrapper[4798]: E0203 00:19:54.260994 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerName="oauth-openshift"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261011 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerName="oauth-openshift"
Feb 03 00:19:54 crc kubenswrapper[4798]: E0203 00:19:54.261025 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" containerName="installer"
Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261032 4798 state_mem.go:107] "Deleted
CPUSet assignment" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" containerName="installer" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261243 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" containerName="oauth-openshift" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261261 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b08551-eba7-488a-9123-3ae00e7a3d77" containerName="installer" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261270 4798 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261293 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84295e9a-e49f-4550-8b66-c13c1d9a31ba" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.261608 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.263099 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.263362 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.263736 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.263968 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.264643 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.264866 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.265142 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.265188 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.265676 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.265793 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 00:19:54 crc 
kubenswrapper[4798]: I0203 00:19:54.266402 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.266588 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.267352 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.270872 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.273053 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.279618 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.304775 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.304756495 podStartE2EDuration="26.304756495s" podCreationTimestamp="2026-02-03 00:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:19:54.300820867 +0000 UTC m=+286.066810918" watchObservedRunningTime="2026-02-03 00:19:54.304756495 +0000 UTC m=+286.070746506" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.384982 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.385046 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtnn\" (UniqueName: \"kubernetes.io/projected/fee4db49-c0e8-46da-acf5-cd31da9b555a-kube-api-access-fvtnn\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.385088 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.385525 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.385636 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-session\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: 
\"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.386603 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-dir\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.386861 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.386906 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-error\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.386932 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-login\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.386970 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.387028 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.387122 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.387160 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.387198 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-policies\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.394838 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.441803 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488436 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488490 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtnn\" (UniqueName: \"kubernetes.io/projected/fee4db49-c0e8-46da-acf5-cd31da9b555a-kube-api-access-fvtnn\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488513 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488531 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488568 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-session\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488594 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-dir\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488618 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488639 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-error\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: 
\"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488684 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-login\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488704 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488721 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488752 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488770 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488788 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-policies\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.488842 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-dir\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.489750 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.490092 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 
00:19:54.491084 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-audit-policies\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.492155 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.495417 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-error\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.495693 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.495865 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.496240 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.496715 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-user-template-login\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.496984 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.499213 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.499317 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fee4db49-c0e8-46da-acf5-cd31da9b555a-v4-0-config-system-session\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.508854 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtnn\" (UniqueName: \"kubernetes.io/projected/fee4db49-c0e8-46da-acf5-cd31da9b555a-kube-api-access-fvtnn\") pod \"oauth-openshift-dd79f44-59gpg\" (UID: \"fee4db49-c0e8-46da-acf5-cd31da9b555a\") " pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.513011 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.549153 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.578375 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.625278 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.771429 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-dd79f44-59gpg"] Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.788444 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 00:19:54 crc kubenswrapper[4798]: I0203 00:19:54.917481 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4c1155-ba10-4dd4-95a8-105c5c7168eb" path="/var/lib/kubelet/pods/0c4c1155-ba10-4dd4-95a8-105c5c7168eb/volumes" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.104185 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.117706 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.155949 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.169754 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.195882 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.279688 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 00:19:55 crc 
kubenswrapper[4798]: I0203 00:19:55.298711 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.308889 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.541343 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" event={"ID":"fee4db49-c0e8-46da-acf5-cd31da9b555a","Type":"ContainerStarted","Data":"4544549a149f1f7bb6462ba553930a6b1c555eaa1b069fadb138631f46080dc4"}
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.541377 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" event={"ID":"fee4db49-c0e8-46da-acf5-cd31da9b555a","Type":"ContainerStarted","Data":"1afc443f2b9d8554cd48d29ea2b529d5ce2b1afde3c505d647ad6145254f7929"}
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.541609 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.549076 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.571314 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-dd79f44-59gpg" podStartSLOduration=57.571294344 podStartE2EDuration="57.571294344s" podCreationTimestamp="2026-02-03 00:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:19:55.567404049 +0000 UTC m=+287.333394100" watchObservedRunningTime="2026-02-03 00:19:55.571294344 +0000 UTC m=+287.337284365"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.628728 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.696589 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.928932 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 03 00:19:55 crc kubenswrapper[4798]: I0203 00:19:55.979756 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.180946 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.393003 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.476096 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.511229 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.659646 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.690788 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.847386 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.920604 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.920686 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:19:56 crc kubenswrapper[4798]: I0203 00:19:56.927281 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:19:57 crc kubenswrapper[4798]: I0203 00:19:57.163350 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 03 00:19:57 crc kubenswrapper[4798]: I0203 00:19:57.559849 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 00:19:57 crc kubenswrapper[4798]: I0203 00:19:57.757938 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 03 00:20:03 crc kubenswrapper[4798]: I0203 00:20:03.708534 4798 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 00:20:03 crc kubenswrapper[4798]: I0203 00:20:03.709293 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4" gracePeriod=5
Feb 03 00:20:08 crc kubenswrapper[4798]: I0203 00:20:08.707453 4798 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.285048 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.285172 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.485479 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.485886 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.485940 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486004 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486151 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486013 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486066 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486077 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486196 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486504 4798 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486525 4798 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486539 4798 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.486552 4798 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.496682 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.587853 4798 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.623412 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.623455 4798 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4" exitCode=137
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.623494 4798 scope.go:117] "RemoveContainer" containerID="2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.623605 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.639703 4798 scope.go:117] "RemoveContainer" containerID="2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4"
Feb 03 00:20:09 crc kubenswrapper[4798]: E0203 00:20:09.640227 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4\": container with ID starting with 2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4 not found: ID does not exist" containerID="2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4"
Feb 03 00:20:09 crc kubenswrapper[4798]: I0203 00:20:09.640262 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4"} err="failed to get container status \"2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4\": rpc error: code = NotFound desc = could not find container \"2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4\": container with ID starting with 2d8b5a2eaa8acb2f676b2c6424957966d9cb2e1924383f389ea6961d2938e2f4 not found: ID does not exist"
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.914933 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.915225 4798 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.924973 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.925008 4798 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="713d1eff-58f8-4a44-9a48-058aeeb846ed"
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.928423 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 00:20:10 crc kubenswrapper[4798]: I0203 00:20:10.928451 4798 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="713d1eff-58f8-4a44-9a48-058aeeb846ed"
Feb 03 00:20:12 crc kubenswrapper[4798]: I0203 00:20:12.268640 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 03 00:20:18 crc kubenswrapper[4798]: I0203 00:20:18.229212 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 00:20:19 crc kubenswrapper[4798]: I0203 00:20:19.695608 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 03 00:20:19 crc kubenswrapper[4798]: I0203 00:20:19.697290 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 03 00:20:19 crc kubenswrapper[4798]: I0203 00:20:19.697336 4798 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="49516a4e59f4000be756a5fed37645d099cd1780922423495014a40579a4307e" exitCode=137
Feb 03 00:20:19 crc kubenswrapper[4798]: I0203 00:20:19.697367 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"49516a4e59f4000be756a5fed37645d099cd1780922423495014a40579a4307e"}
Feb 03 00:20:19 crc kubenswrapper[4798]: I0203 00:20:19.697403 4798 scope.go:117] "RemoveContainer" containerID="1da94f34ed139edbb39d3e777c72192e4d6d444b191e109b4c72ccb945513a53"
Feb 03 00:20:20 crc kubenswrapper[4798]: I0203 00:20:20.710348 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 03 00:20:20 crc kubenswrapper[4798]: I0203 00:20:20.712416 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"53038f29562bd6d76f8aef31f4c118825ff611c577b1faf4c83e5eac90a69393"}
Feb 03 00:20:29 crc kubenswrapper[4798]: I0203 00:20:29.194025 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:20:29 crc kubenswrapper[4798]: I0203 00:20:29.197485 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:20:29 crc kubenswrapper[4798]: I0203 00:20:29.761798 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:20:29 crc kubenswrapper[4798]: I0203 00:20:29.765964 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.302187 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"]
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.303002 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager" containerID="cri-o://02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77" gracePeriod=30
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.366342 4798 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tpkwn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.366757 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.413998 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"]
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.414268 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerName="route-controller-manager" containerID="cri-o://ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859" gracePeriod=30
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.755539 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.812430 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.820895 4798 generic.go:334] "Generic (PLEG): container finished" podID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerID="ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859" exitCode=0
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.820965 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.821001 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" event={"ID":"46425fd6-9499-4e9a-8450-3fafbe2c6611","Type":"ContainerDied","Data":"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"}
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.821033 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl" event={"ID":"46425fd6-9499-4e9a-8450-3fafbe2c6611","Type":"ContainerDied","Data":"d75a8510ffa24831b912d8d89149db14e0220f16d9853f49aead6ee19bbb3113"}
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.821063 4798 scope.go:117] "RemoveContainer" containerID="ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.823048 4798 generic.go:334] "Generic (PLEG): container finished" podID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerID="02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77" exitCode=0
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.823090 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" event={"ID":"0116d04c-adc4-4adc-ab03-21058672d6e8","Type":"ContainerDied","Data":"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"}
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.823101 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.823119 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpkwn" event={"ID":"0116d04c-adc4-4adc-ab03-21058672d6e8","Type":"ContainerDied","Data":"447985d593cd3de88977e310cdad52a4f5db74c1adad996c9062d1232e20fd97"}
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.840983 4798 scope.go:117] "RemoveContainer" containerID="ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"
Feb 03 00:20:39 crc kubenswrapper[4798]: E0203 00:20:39.841446 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859\": container with ID starting with ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859 not found: ID does not exist" containerID="ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.841521 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859"} err="failed to get container status \"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859\": rpc error: code = NotFound desc = could not find container \"ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859\": container with ID starting with ef3f860bc643168979d68d02ced0629e3778569f2a0f9f1e18469839cb692859 not found: ID does not exist"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.841559 4798 scope.go:117] "RemoveContainer" containerID="02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.866625 4798 scope.go:117] "RemoveContainer" containerID="02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"
Feb 03 00:20:39 crc kubenswrapper[4798]: E0203 00:20:39.867580 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77\": container with ID starting with 02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77 not found: ID does not exist" containerID="02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.867620 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77"} err="failed to get container status \"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77\": rpc error: code = NotFound desc = could not find container \"02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77\": container with ID starting with 02d7498df0a89ea559b1312b3d81d273f3db8666294c417a2d17de215e5f4d77 not found: ID does not exist"
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.909288 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca\") pod \"0116d04c-adc4-4adc-ab03-21058672d6e8\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.910645 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q9c\" (UniqueName: \"kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c\") pod \"0116d04c-adc4-4adc-ab03-21058672d6e8\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.910704 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert\") pod \"0116d04c-adc4-4adc-ab03-21058672d6e8\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.910845 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config\") pod \"0116d04c-adc4-4adc-ab03-21058672d6e8\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.910881 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmbbx\" (UniqueName: \"kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx\") pod \"46425fd6-9499-4e9a-8450-3fafbe2c6611\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.910580 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0116d04c-adc4-4adc-ab03-21058672d6e8" (UID: "0116d04c-adc4-4adc-ab03-21058672d6e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912222 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles\") pod \"0116d04c-adc4-4adc-ab03-21058672d6e8\" (UID: \"0116d04c-adc4-4adc-ab03-21058672d6e8\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912258 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert\") pod \"46425fd6-9499-4e9a-8450-3fafbe2c6611\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912300 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config\") pod \"46425fd6-9499-4e9a-8450-3fafbe2c6611\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912324 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca\") pod \"46425fd6-9499-4e9a-8450-3fafbe2c6611\" (UID: \"46425fd6-9499-4e9a-8450-3fafbe2c6611\") "
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912726 4798 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912744 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0116d04c-adc4-4adc-ab03-21058672d6e8" (UID: "0116d04c-adc4-4adc-ab03-21058672d6e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.912471 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config" (OuterVolumeSpecName: "config") pod "0116d04c-adc4-4adc-ab03-21058672d6e8" (UID: "0116d04c-adc4-4adc-ab03-21058672d6e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.913203 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca" (OuterVolumeSpecName: "client-ca") pod "46425fd6-9499-4e9a-8450-3fafbe2c6611" (UID: "46425fd6-9499-4e9a-8450-3fafbe2c6611"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.913337 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config" (OuterVolumeSpecName: "config") pod "46425fd6-9499-4e9a-8450-3fafbe2c6611" (UID: "46425fd6-9499-4e9a-8450-3fafbe2c6611"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.917098 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0116d04c-adc4-4adc-ab03-21058672d6e8" (UID: "0116d04c-adc4-4adc-ab03-21058672d6e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.917125 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c" (OuterVolumeSpecName: "kube-api-access-t2q9c") pod "0116d04c-adc4-4adc-ab03-21058672d6e8" (UID: "0116d04c-adc4-4adc-ab03-21058672d6e8"). InnerVolumeSpecName "kube-api-access-t2q9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.917415 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46425fd6-9499-4e9a-8450-3fafbe2c6611" (UID: "46425fd6-9499-4e9a-8450-3fafbe2c6611"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:20:39 crc kubenswrapper[4798]: I0203 00:20:39.919104 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx" (OuterVolumeSpecName: "kube-api-access-fmbbx") pod "46425fd6-9499-4e9a-8450-3fafbe2c6611" (UID: "46425fd6-9499-4e9a-8450-3fafbe2c6611"). InnerVolumeSpecName "kube-api-access-fmbbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013566 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-config\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013614 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmbbx\" (UniqueName: \"kubernetes.io/projected/46425fd6-9499-4e9a-8450-3fafbe2c6611-kube-api-access-fmbbx\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013628 4798 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0116d04c-adc4-4adc-ab03-21058672d6e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013637 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46425fd6-9499-4e9a-8450-3fafbe2c6611-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013684 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-config\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013695 4798 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46425fd6-9499-4e9a-8450-3fafbe2c6611-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013703 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q9c\" (UniqueName: \"kubernetes.io/projected/0116d04c-adc4-4adc-ab03-21058672d6e8-kube-api-access-t2q9c\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.013712 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0116d04c-adc4-4adc-ab03-21058672d6e8-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.150829 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"]
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.154561 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6pxdl"]
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.172075 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"]
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.176064 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpkwn"]
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.565708 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9d547c5-ps4r4"]
Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.565911 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.565922 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.565939 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerName="route-controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.565945 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerName="route-controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.565959 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.565965 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.566054 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.566065 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" containerName="route-controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.566078 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" containerName="controller-manager"
Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.566408 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4"
Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.569691 4798 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.569735 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.569794 4798 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.569808 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.569938 4798 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed
to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.570010 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.570475 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.570645 4798 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.570688 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.570749 4798 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": 
failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.570765 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.570840 4798 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.570851 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.571120 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: W0203 00:20:40.571333 4798 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 03 00:20:40 crc kubenswrapper[4798]: E0203 00:20:40.571364 4798 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.573878 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.574142 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.575278 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.575712 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.575824 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 
00:20:40.576613 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.582718 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9d547c5-ps4r4"] Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.592880 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.619952 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa4acc9-31b6-4612-a044-32f3b9acb689-serving-cert\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620018 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620081 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620110 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620160 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620207 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620239 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ghk\" (UniqueName: \"kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620275 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " 
pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.620364 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpts\" (UniqueName: \"kubernetes.io/projected/dfa4acc9-31b6-4612-a044-32f3b9acb689-kube-api-access-fhpts\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721174 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721223 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721263 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721297 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721319 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ghk\" (UniqueName: \"kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721350 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721408 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhpts\" (UniqueName: \"kubernetes.io/projected/dfa4acc9-31b6-4612-a044-32f3b9acb689-kube-api-access-fhpts\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.721434 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa4acc9-31b6-4612-a044-32f3b9acb689-serving-cert\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:40 crc 
kubenswrapper[4798]: I0203 00:20:40.721460 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.722756 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.723091 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.726781 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.738727 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ghk\" (UniqueName: \"kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk\") pod \"route-controller-manager-55b5775f89-psvr8\" (UID: 
\"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.891829 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.922371 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0116d04c-adc4-4adc-ab03-21058672d6e8" path="/var/lib/kubelet/pods/0116d04c-adc4-4adc-ab03-21058672d6e8/volumes" Feb 03 00:20:40 crc kubenswrapper[4798]: I0203 00:20:40.923962 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46425fd6-9499-4e9a-8450-3fafbe2c6611" path="/var/lib/kubelet/pods/46425fd6-9499-4e9a-8450-3fafbe2c6611/volumes" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.079178 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.593438 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.692582 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.705446 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.708221 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa4acc9-31b6-4612-a044-32f3b9acb689-serving-cert\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " 
pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.721958 4798 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.722043 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config podName:dfa4acc9-31b6-4612-a044-32f3b9acb689 nodeName:}" failed. No retries permitted until 2026-02-03 00:20:42.222023764 +0000 UTC m=+333.988013765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config") pod "controller-manager-7d9d547c5-ps4r4" (UID: "dfa4acc9-31b6-4612-a044-32f3b9acb689") : failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.722280 4798 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.722321 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca podName:dfa4acc9-31b6-4612-a044-32f3b9acb689 nodeName:}" failed. No retries permitted until 2026-02-03 00:20:42.222307612 +0000 UTC m=+333.988297623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca") pod "controller-manager-7d9d547c5-ps4r4" (UID: "dfa4acc9-31b6-4612-a044-32f3b9acb689") : failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.722343 4798 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: E0203 00:20:41.722361 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles podName:dfa4acc9-31b6-4612-a044-32f3b9acb689 nodeName:}" failed. No retries permitted until 2026-02-03 00:20:42.222354603 +0000 UTC m=+333.988344614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles") pod "controller-manager-7d9d547c5-ps4r4" (UID: "dfa4acc9-31b6-4612-a044-32f3b9acb689") : failed to sync configmap cache: timed out waiting for the condition Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.749051 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.776393 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.838067 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" event={"ID":"32508b48-f159-4a5a-898d-3d5ad26ac7c9","Type":"ContainerStarted","Data":"de76936386d38f821cae5939b725070c52c4ef99efde7d334a30c0b5bd4d98e4"} Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.838146 
4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" event={"ID":"32508b48-f159-4a5a-898d-3d5ad26ac7c9","Type":"ContainerStarted","Data":"0d97a43be6e7d9f23c133c4606ded941dd10e314bc70881aa4dd3cd71cd2cc0d"} Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.838366 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.853973 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" podStartSLOduration=2.85395557 podStartE2EDuration="2.85395557s" podCreationTimestamp="2026-02-03 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:20:41.852718595 +0000 UTC m=+333.618708606" watchObservedRunningTime="2026-02-03 00:20:41.85395557 +0000 UTC m=+333.619945581" Feb 03 00:20:41 crc kubenswrapper[4798]: I0203 00:20:41.881164 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.146404 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.160888 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhpts\" (UniqueName: \"kubernetes.io/projected/dfa4acc9-31b6-4612-a044-32f3b9acb689-kube-api-access-fhpts\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.200910 4798 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.240857 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.240953 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.242128 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-client-ca\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.242191 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.242472 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-proxy-ca-bundles\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.242625 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa4acc9-31b6-4612-a044-32f3b9acb689-config\") pod \"controller-manager-7d9d547c5-ps4r4\" (UID: \"dfa4acc9-31b6-4612-a044-32f3b9acb689\") " pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.383508 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.763822 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9d547c5-ps4r4"] Feb 03 00:20:42 crc kubenswrapper[4798]: I0203 00:20:42.843590 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" event={"ID":"dfa4acc9-31b6-4612-a044-32f3b9acb689","Type":"ContainerStarted","Data":"a45db8e74b6d14f0f57a88a27944fdd827316ea5ec4cf05d4f61e35abf758452"} Feb 03 00:20:43 crc kubenswrapper[4798]: I0203 00:20:43.850210 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" event={"ID":"dfa4acc9-31b6-4612-a044-32f3b9acb689","Type":"ContainerStarted","Data":"dc603c046e756f1cb75509cb06432af28ea941859b3d0bd8f30b822e134fda2a"} Feb 03 00:20:43 crc kubenswrapper[4798]: I0203 00:20:43.850583 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:43 crc kubenswrapper[4798]: I0203 00:20:43.856110 4798 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" Feb 03 00:20:43 crc kubenswrapper[4798]: I0203 00:20:43.876164 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d9d547c5-ps4r4" podStartSLOduration=4.876130717 podStartE2EDuration="4.876130717s" podCreationTimestamp="2026-02-03 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:20:43.872505434 +0000 UTC m=+335.638495445" watchObservedRunningTime="2026-02-03 00:20:43.876130717 +0000 UTC m=+335.642120728" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.760282 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r7dmz"] Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.761984 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.776060 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r7dmz"] Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954738 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f9f9892-d25a-419e-98f4-3f8f789697ea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954782 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvv7w\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-kube-api-access-bvv7w\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954809 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-certificates\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954825 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-trusted-ca\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954850 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-bound-sa-token\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.954963 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-tls\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.955075 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.955255 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f9f9892-d25a-419e-98f4-3f8f789697ea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:58 crc kubenswrapper[4798]: I0203 00:20:58.979322 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057182 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f9f9892-d25a-419e-98f4-3f8f789697ea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057226 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvv7w\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-kube-api-access-bvv7w\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057256 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-certificates\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057278 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-trusted-ca\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057307 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-bound-sa-token\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057332 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-tls\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.057386 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f9f9892-d25a-419e-98f4-3f8f789697ea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.058055 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f9f9892-d25a-419e-98f4-3f8f789697ea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.058585 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-trusted-ca\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 
crc kubenswrapper[4798]: I0203 00:20:59.058750 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-certificates\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.062823 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f9f9892-d25a-419e-98f4-3f8f789697ea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.063874 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-registry-tls\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.073065 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvv7w\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-kube-api-access-bvv7w\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.080498 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f9f9892-d25a-419e-98f4-3f8f789697ea-bound-sa-token\") pod \"image-registry-66df7c8f76-r7dmz\" (UID: \"1f9f9892-d25a-419e-98f4-3f8f789697ea\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.223398 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.622963 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r7dmz"] Feb 03 00:20:59 crc kubenswrapper[4798]: W0203 00:20:59.633487 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9f9892_d25a_419e_98f4_3f8f789697ea.slice/crio-76d78f4eb8244f9f71318b5cae4bfb3497550fccf6d34a45d524b59607548758 WatchSource:0}: Error finding container 76d78f4eb8244f9f71318b5cae4bfb3497550fccf6d34a45d524b59607548758: Status 404 returned error can't find the container with id 76d78f4eb8244f9f71318b5cae4bfb3497550fccf6d34a45d524b59607548758 Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.962679 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" event={"ID":"1f9f9892-d25a-419e-98f4-3f8f789697ea","Type":"ContainerStarted","Data":"a96d3d3b5530fd7122f283a3f823066dc986351d0a9f045c22e4ce8b2a5308c5"} Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.963052 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.963065 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" event={"ID":"1f9f9892-d25a-419e-98f4-3f8f789697ea","Type":"ContainerStarted","Data":"76d78f4eb8244f9f71318b5cae4bfb3497550fccf6d34a45d524b59607548758"} Feb 03 00:20:59 crc kubenswrapper[4798]: I0203 00:20:59.982590 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" podStartSLOduration=1.982576307 podStartE2EDuration="1.982576307s" podCreationTimestamp="2026-02-03 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:20:59.980305552 +0000 UTC m=+351.746295563" watchObservedRunningTime="2026-02-03 00:20:59.982576307 +0000 UTC m=+351.748566318" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.643292 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.643825 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4fw8" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="registry-server" containerID="cri-o://84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4" gracePeriod=30 Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.655115 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k8gx"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.655364 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k8gx" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="registry-server" containerID="cri-o://a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83" gracePeriod=30 Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.675152 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.675454 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" 
containerName="marketplace-operator" containerID="cri-o://3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b" gracePeriod=30 Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.681604 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.681968 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kx75m" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="registry-server" containerID="cri-o://925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9" gracePeriod=30 Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.685524 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.685811 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lp6cb" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="registry-server" containerID="cri-o://619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee" gracePeriod=30 Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.696197 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zlmb"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.696869 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.710357 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zlmb"] Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.811239 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.811478 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrt5r\" (UniqueName: \"kubernetes.io/projected/f6708d80-8899-4ef9-a0aa-0acb736f01ed-kube-api-access-jrt5r\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.811540 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.913360 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrt5r\" (UniqueName: \"kubernetes.io/projected/f6708d80-8899-4ef9-a0aa-0acb736f01ed-kube-api-access-jrt5r\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: 
\"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.913725 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.913821 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.915302 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.919588 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f6708d80-8899-4ef9-a0aa-0acb736f01ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:01 crc kubenswrapper[4798]: I0203 00:21:01.931271 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jrt5r\" (UniqueName: \"kubernetes.io/projected/f6708d80-8899-4ef9-a0aa-0acb736f01ed-kube-api-access-jrt5r\") pod \"marketplace-operator-79b997595-2zlmb\" (UID: \"f6708d80-8899-4ef9-a0aa-0acb736f01ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.020555 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.461150 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zlmb"] Feb 03 00:21:02 crc kubenswrapper[4798]: W0203 00:21:02.475823 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6708d80_8899_4ef9_a0aa_0acb736f01ed.slice/crio-a49c5172dfc56158abcf9c8960e362ee131089e67e3be22b78c816042ec25045 WatchSource:0}: Error finding container a49c5172dfc56158abcf9c8960e362ee131089e67e3be22b78c816042ec25045: Status 404 returned error can't find the container with id a49c5172dfc56158abcf9c8960e362ee131089e67e3be22b78c816042ec25045 Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.718070 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.828202 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzj6f\" (UniqueName: \"kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f\") pod \"1fafb100-14c8-437f-b5ac-4264b4cbef55\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.828576 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content\") pod \"1fafb100-14c8-437f-b5ac-4264b4cbef55\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.828627 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities\") pod \"1fafb100-14c8-437f-b5ac-4264b4cbef55\" (UID: \"1fafb100-14c8-437f-b5ac-4264b4cbef55\") " Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.829860 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities" (OuterVolumeSpecName: "utilities") pod "1fafb100-14c8-437f-b5ac-4264b4cbef55" (UID: "1fafb100-14c8-437f-b5ac-4264b4cbef55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.835791 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f" (OuterVolumeSpecName: "kube-api-access-zzj6f") pod "1fafb100-14c8-437f-b5ac-4264b4cbef55" (UID: "1fafb100-14c8-437f-b5ac-4264b4cbef55"). InnerVolumeSpecName "kube-api-access-zzj6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.894729 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.929742 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzj6f\" (UniqueName: \"kubernetes.io/projected/1fafb100-14c8-437f-b5ac-4264b4cbef55-kube-api-access-zzj6f\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.929771 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.959035 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.964120 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.973157 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.983238 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" event={"ID":"f6708d80-8899-4ef9-a0aa-0acb736f01ed","Type":"ContainerStarted","Data":"a49c5172dfc56158abcf9c8960e362ee131089e67e3be22b78c816042ec25045"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.987463 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fafb100-14c8-437f-b5ac-4264b4cbef55" (UID: "1fafb100-14c8-437f-b5ac-4264b4cbef55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.991785 4798 generic.go:334] "Generic (PLEG): container finished" podID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerID="925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9" exitCode=0 Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.991909 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerDied","Data":"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.991944 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kx75m" event={"ID":"e128d005-b5e0-4da0-b122-3f69a1751d1f","Type":"ContainerDied","Data":"d69179891d7daf30e66f00ddaff76d07681ae7681d087e7d34c19d9995c99321"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.991961 4798 scope.go:117] "RemoveContainer" containerID="925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.992110 
4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kx75m" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.996176 4798 generic.go:334] "Generic (PLEG): container finished" podID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerID="a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83" exitCode=0 Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.996245 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k8gx" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.996258 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerDied","Data":"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.996288 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k8gx" event={"ID":"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d","Type":"ContainerDied","Data":"e57d6801db1e20a2cae8ec8b07e1010ebc19ce9e51ed157dbc56ac05ae7fd8a4"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.997505 4798 generic.go:334] "Generic (PLEG): container finished" podID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerID="3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b" exitCode=0 Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.997556 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.997571 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" event={"ID":"81abe77a-25d5-4fe5-a592-e83853be1b63","Type":"ContainerDied","Data":"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"} Feb 03 00:21:02 crc kubenswrapper[4798]: I0203 00:21:02.997592 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xz4k" event={"ID":"81abe77a-25d5-4fe5-a592-e83853be1b63","Type":"ContainerDied","Data":"bfd901c958dddde302b2795ebece6f55a38614ec439b4d9e587830ae1dd9d9bb"} Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.010769 4798 scope.go:117] "RemoveContainer" containerID="10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.012898 4798 generic.go:334] "Generic (PLEG): container finished" podID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerID="619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee" exitCode=0 Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.012967 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerDied","Data":"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"} Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.012996 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lp6cb" event={"ID":"1fafb100-14c8-437f-b5ac-4264b4cbef55","Type":"ContainerDied","Data":"5cb1d0b4b08968d80788267df7c97ca1ca7495d4654febec0fcf91eadacc569f"} Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.013061 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lp6cb" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.015665 4798 generic.go:334] "Generic (PLEG): container finished" podID="61992978-85f4-4395-b65a-d5efe47c79d8" containerID="84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4" exitCode=0 Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.015709 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerDied","Data":"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"} Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.015726 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4fw8" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.015738 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4fw8" event={"ID":"61992978-85f4-4395-b65a-d5efe47c79d8","Type":"ContainerDied","Data":"9b5577557129f00532b80038b737462a6e3093eeb48235f03942a167a7d12c09"} Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.027214 4798 scope.go:117] "RemoveContainer" containerID="9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.030277 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content\") pod \"61992978-85f4-4395-b65a-d5efe47c79d8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.030344 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5b8s\" (UniqueName: \"kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s\") pod 
\"61992978-85f4-4395-b65a-d5efe47c79d8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.030377 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities\") pod \"61992978-85f4-4395-b65a-d5efe47c79d8\" (UID: \"61992978-85f4-4395-b65a-d5efe47c79d8\") " Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.030685 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fafb100-14c8-437f-b5ac-4264b4cbef55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.033423 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities" (OuterVolumeSpecName: "utilities") pod "61992978-85f4-4395-b65a-d5efe47c79d8" (UID: "61992978-85f4-4395-b65a-d5efe47c79d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.037979 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s" (OuterVolumeSpecName: "kube-api-access-d5b8s") pod "61992978-85f4-4395-b65a-d5efe47c79d8" (UID: "61992978-85f4-4395-b65a-d5efe47c79d8"). InnerVolumeSpecName "kube-api-access-d5b8s". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.045420 4798 scope.go:117] "RemoveContainer" containerID="925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.050815 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9\": container with ID starting with 925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9 not found: ID does not exist" containerID="925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.050861 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9"} err="failed to get container status \"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9\": rpc error: code = NotFound desc = could not find container \"925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9\": container with ID starting with 925ea9be00f1cdd010832a1f2f16bb598a6e8f4fbe4a4ca6d7b63632189477c9 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.050889 4798 scope.go:117] "RemoveContainer" containerID="10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.053041 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b\": container with ID starting with 10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b not found: ID does not exist" containerID="10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.053072 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b"} err="failed to get container status \"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b\": rpc error: code = NotFound desc = could not find container \"10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b\": container with ID starting with 10b9a574d91d2aa78d0ee2390964dd4526e29437b3621aaf3516f78a374a1c3b not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.053100 4798 scope.go:117] "RemoveContainer" containerID="9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.054439 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0\": container with ID starting with 9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0 not found: ID does not exist" containerID="9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.054467 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0"} err="failed to get container status \"9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0\": rpc error: code = NotFound desc = could not find container \"9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0\": container with ID starting with 9846eb94387d924a22f3c7a3c7f1d6ebf40273d824327a5642d2d379ed6c83f0 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.054485 4798 scope.go:117] "RemoveContainer" containerID="a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.072440 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.073884 4798 scope.go:117] "RemoveContainer" containerID="8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.083137 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lp6cb"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.104454 4798 scope.go:117] "RemoveContainer" containerID="1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.118110 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61992978-85f4-4395-b65a-d5efe47c79d8" (UID: "61992978-85f4-4395-b65a-d5efe47c79d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.121316 4798 scope.go:117] "RemoveContainer" containerID="a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.121833 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83\": container with ID starting with a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83 not found: ID does not exist" containerID="a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.121872 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83"} err="failed to get container status \"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83\": rpc error: code = NotFound desc = could not find container \"a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83\": container with ID starting with a0696853f0e764005e786824c7504b60ff7ddb48d862d9a2e237fe713d981c83 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.121905 4798 scope.go:117] "RemoveContainer" containerID="8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.122471 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3\": container with ID starting with 8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3 not found: ID does not exist" containerID="8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.122506 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3"} err="failed to get container status \"8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3\": rpc error: code = NotFound desc = could not find container \"8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3\": container with ID starting with 8f2a14fa0e983089d994c1d2e7a94bab409d8eb67517b6def94c9fe4cde980a3 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.122526 4798 scope.go:117] "RemoveContainer" containerID="1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.122823 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89\": container with ID starting with 1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89 not found: ID does not exist" containerID="1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.122852 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89"} err="failed to get container status \"1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89\": rpc error: code = NotFound desc = could not find container \"1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89\": container with ID starting with 1d1cf0407b5d9cfc787d221ea4a0fdf27f66e8a86caaf4a8bef88beff7540d89 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.122873 4798 scope.go:117] "RemoveContainer" containerID="3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.131959 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics\") pod \"81abe77a-25d5-4fe5-a592-e83853be1b63\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132061 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities\") pod \"e128d005-b5e0-4da0-b122-3f69a1751d1f\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132104 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content\") pod \"e128d005-b5e0-4da0-b122-3f69a1751d1f\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132151 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd7rw\" (UniqueName: \"kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw\") pod \"81abe77a-25d5-4fe5-a592-e83853be1b63\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132206 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca\") pod \"81abe77a-25d5-4fe5-a592-e83853be1b63\" (UID: \"81abe77a-25d5-4fe5-a592-e83853be1b63\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132236 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content\") pod \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132260 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities\") pod \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132289 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2vl\" (UniqueName: \"kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl\") pod \"e128d005-b5e0-4da0-b122-3f69a1751d1f\" (UID: \"e128d005-b5e0-4da0-b122-3f69a1751d1f\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132322 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29fj\" (UniqueName: \"kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj\") pod \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\" (UID: \"3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d\") "
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132584 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5b8s\" (UniqueName: \"kubernetes.io/projected/61992978-85f4-4395-b65a-d5efe47c79d8-kube-api-access-d5b8s\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132610 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.132626 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61992978-85f4-4395-b65a-d5efe47c79d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.133766 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "81abe77a-25d5-4fe5-a592-e83853be1b63" (UID: "81abe77a-25d5-4fe5-a592-e83853be1b63"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.133905 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities" (OuterVolumeSpecName: "utilities") pod "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" (UID: "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.134429 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities" (OuterVolumeSpecName: "utilities") pod "e128d005-b5e0-4da0-b122-3f69a1751d1f" (UID: "e128d005-b5e0-4da0-b122-3f69a1751d1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.137857 4798 scope.go:117] "RemoveContainer" containerID="3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.139030 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl" (OuterVolumeSpecName: "kube-api-access-gn2vl") pod "e128d005-b5e0-4da0-b122-3f69a1751d1f" (UID: "e128d005-b5e0-4da0-b122-3f69a1751d1f"). InnerVolumeSpecName "kube-api-access-gn2vl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.139157 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw" (OuterVolumeSpecName: "kube-api-access-wd7rw") pod "81abe77a-25d5-4fe5-a592-e83853be1b63" (UID: "81abe77a-25d5-4fe5-a592-e83853be1b63"). InnerVolumeSpecName "kube-api-access-wd7rw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.139260 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj" (OuterVolumeSpecName: "kube-api-access-m29fj") pod "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" (UID: "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d"). InnerVolumeSpecName "kube-api-access-m29fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.139260 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b\": container with ID starting with 3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b not found: ID does not exist" containerID="3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.139367 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b"} err="failed to get container status \"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b\": rpc error: code = NotFound desc = could not find container \"3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b\": container with ID starting with 3b2c631006ee33d12528d3284efcc852a88be073a7a51aa6eb4801b19d7cbf8b not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.139431 4798 scope.go:117] "RemoveContainer" containerID="619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.141180 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "81abe77a-25d5-4fe5-a592-e83853be1b63" (UID: "81abe77a-25d5-4fe5-a592-e83853be1b63"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.156584 4798 scope.go:117] "RemoveContainer" containerID="4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.163308 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e128d005-b5e0-4da0-b122-3f69a1751d1f" (UID: "e128d005-b5e0-4da0-b122-3f69a1751d1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.172720 4798 scope.go:117] "RemoveContainer" containerID="de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.183682 4798 scope.go:117] "RemoveContainer" containerID="619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.184238 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee\": container with ID starting with 619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee not found: ID does not exist" containerID="619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.184341 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee"} err="failed to get container status \"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee\": rpc error: code = NotFound desc = could not find container \"619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee\": container with ID starting with 619f1a8e0967331f4c0822d14aa6f0d3a32795f39c92782e3fc64a33ca642eee not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.184435 4798 scope.go:117] "RemoveContainer" containerID="4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.184857 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d\": container with ID starting with 4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d not found: ID does not exist" containerID="4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.184970 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d"} err="failed to get container status \"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d\": rpc error: code = NotFound desc = could not find container \"4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d\": container with ID starting with 4963794837af9af016441c53fabdfcfb88b2e28ad8ce20a66a1f2e0f7732c41d not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.185058 4798 scope.go:117] "RemoveContainer" containerID="de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.185377 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213\": container with ID starting with de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213 not found: ID does not exist" containerID="de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.185479 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213"} err="failed to get container status \"de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213\": rpc error: code = NotFound desc = could not find container \"de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213\": container with ID starting with de982e199df10afe287521d4e31e9554a7dd6ebebdc20eb31c98a3c7e1566213 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.185559 4798 scope.go:117] "RemoveContainer" containerID="84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.189965 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" (UID: "3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.200066 4798 scope.go:117] "RemoveContainer" containerID="945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.218638 4798 scope.go:117] "RemoveContainer" containerID="643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233639 4798 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233696 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233705 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e128d005-b5e0-4da0-b122-3f69a1751d1f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233716 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd7rw\" (UniqueName: \"kubernetes.io/projected/81abe77a-25d5-4fe5-a592-e83853be1b63-kube-api-access-wd7rw\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233724 4798 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81abe77a-25d5-4fe5-a592-e83853be1b63-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233734 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233741 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233749 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2vl\" (UniqueName: \"kubernetes.io/projected/e128d005-b5e0-4da0-b122-3f69a1751d1f-kube-api-access-gn2vl\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.233757 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29fj\" (UniqueName: \"kubernetes.io/projected/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d-kube-api-access-m29fj\") on node \"crc\" DevicePath \"\""
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.235145 4798 scope.go:117] "RemoveContainer" containerID="84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.235690 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4\": container with ID starting with 84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4 not found: ID does not exist" containerID="84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.235718 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4"} err="failed to get container status \"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4\": rpc error: code = NotFound desc = could not find container \"84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4\": container with ID starting with 84b76f12475ef340e1341b6bc128c16c887f1e53c05a3271f53f965ea98700c4 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.235746 4798 scope.go:117] "RemoveContainer" containerID="945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.236273 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867\": container with ID starting with 945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867 not found: ID does not exist" containerID="945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.236297 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867"} err="failed to get container status \"945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867\": rpc error: code = NotFound desc = could not find container \"945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867\": container with ID starting with 945f287f32b651f6703ede87a33f4add9789d56f9d9de217a8b54c6372957867 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.236310 4798 scope.go:117] "RemoveContainer" containerID="643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62"
Feb 03 00:21:03 crc kubenswrapper[4798]: E0203 00:21:03.236492 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62\": container with ID starting with 643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62 not found: ID does not exist" containerID="643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.236523 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62"} err="failed to get container status \"643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62\": rpc error: code = NotFound desc = could not find container \"643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62\": container with ID starting with 643964eb28dbc6c77ee40bad868223d14c584ebc497de30687b4e892c7fd6d62 not found: ID does not exist"
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.331838 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.362039 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kx75m"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.370254 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k8gx"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.378028 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k8gx"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.380964 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.390128 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xz4k"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.396028 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"]
Feb 03 00:21:03 crc kubenswrapper[4798]: I0203 00:21:03.400432 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4fw8"]
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.025546 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" event={"ID":"f6708d80-8899-4ef9-a0aa-0acb736f01ed","Type":"ContainerStarted","Data":"0348eb973a5b2faa3111449d5a44c0f8e7fab7b7df7b17ad407275117d059e72"}
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.025959 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.029533 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.041503 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2zlmb" podStartSLOduration=3.04148507 podStartE2EDuration="3.04148507s" podCreationTimestamp="2026-02-03 00:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:21:04.040037489 +0000 UTC m=+355.806027510" watchObservedRunningTime="2026-02-03 00:21:04.04148507 +0000 UTC m=+355.807475081"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.914438 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" path="/var/lib/kubelet/pods/1fafb100-14c8-437f-b5ac-4264b4cbef55/volumes"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.915108 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" path="/var/lib/kubelet/pods/3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d/volumes"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.915704 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" path="/var/lib/kubelet/pods/61992978-85f4-4395-b65a-d5efe47c79d8/volumes"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.916879 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" path="/var/lib/kubelet/pods/81abe77a-25d5-4fe5-a592-e83853be1b63/volumes"
Feb 03 00:21:04 crc kubenswrapper[4798]: I0203 00:21:04.917297 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" path="/var/lib/kubelet/pods/e128d005-b5e0-4da0-b122-3f69a1751d1f/volumes"
Feb 03 00:21:13 crc kubenswrapper[4798]: I0203 00:21:13.867102 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 00:21:13 crc kubenswrapper[4798]: I0203 00:21:13.867625 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610142 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"]
Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610355 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="extract-utilities"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610367 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="extract-utilities"
Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610378 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="extract-utilities"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610384 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="extract-utilities"
Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610395 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="registry-server"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610401 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="registry-server"
Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610407 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="extract-content"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610413 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="extract-content"
Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610419 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="extract-utilities"
Feb 03 00:21:15 crc kubenswrapper[4798]: I0203
00:21:15.610424 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="extract-utilities" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610433 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610440 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610446 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610452 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610458 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610464 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610473 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610479 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610488 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 
00:21:15.610493 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610503 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610509 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610516 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="extract-utilities" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610522 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="extract-utilities" Feb 03 00:21:15 crc kubenswrapper[4798]: E0203 00:21:15.610528 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610533 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="extract-content" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610612 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9cd4cd-37c3-49ec-b72d-77b2c3f2017d" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610623 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="61992978-85f4-4395-b65a-d5efe47c79d8" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610633 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="e128d005-b5e0-4da0-b122-3f69a1751d1f" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 
00:21:15.610644 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="81abe77a-25d5-4fe5-a592-e83853be1b63" containerName="marketplace-operator" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.610668 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fafb100-14c8-437f-b5ac-4264b4cbef55" containerName="registry-server" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.611334 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.614634 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.618972 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"] Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.707549 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.707608 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtsw\" (UniqueName: \"kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.707699 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.810936 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.810997 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtsw\" (UniqueName: \"kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.811039 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.811678 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.811836 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.812810 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8bgg"] Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.814580 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.816995 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.816989 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8bgg"] Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.860551 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtsw\" (UniqueName: \"kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw\") pod \"redhat-marketplace-qdvvb\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.912545 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsz6\" (UniqueName: \"kubernetes.io/projected/05b13579-cc4d-48e7-87f2-814f2adf2fdd-kube-api-access-zxsz6\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.912635 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-catalog-content\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.912738 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-utilities\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:15 crc kubenswrapper[4798]: I0203 00:21:15.930613 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.013481 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-catalog-content\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.013805 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-utilities\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.013915 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxsz6\" (UniqueName: \"kubernetes.io/projected/05b13579-cc4d-48e7-87f2-814f2adf2fdd-kube-api-access-zxsz6\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc 
kubenswrapper[4798]: I0203 00:21:16.014070 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-catalog-content\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.014378 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b13579-cc4d-48e7-87f2-814f2adf2fdd-utilities\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.036188 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxsz6\" (UniqueName: \"kubernetes.io/projected/05b13579-cc4d-48e7-87f2-814f2adf2fdd-kube-api-access-zxsz6\") pod \"redhat-operators-z8bgg\" (UID: \"05b13579-cc4d-48e7-87f2-814f2adf2fdd\") " pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.127591 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.317253 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"] Feb 03 00:21:16 crc kubenswrapper[4798]: I0203 00:21:16.508439 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8bgg"] Feb 03 00:21:16 crc kubenswrapper[4798]: W0203 00:21:16.515985 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b13579_cc4d_48e7_87f2_814f2adf2fdd.slice/crio-7b9f8058d5c4aa687b1d38f35078b1051b3740cf008d413996493e2eb0977f68 WatchSource:0}: Error finding container 7b9f8058d5c4aa687b1d38f35078b1051b3740cf008d413996493e2eb0977f68: Status 404 returned error can't find the container with id 7b9f8058d5c4aa687b1d38f35078b1051b3740cf008d413996493e2eb0977f68 Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.101485 4798 generic.go:334] "Generic (PLEG): container finished" podID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerID="f5890970047ce63a42dc79cfa89fafeb042f484b41acd3992b3fbd6a41ffcdc9" exitCode=0 Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.101683 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerDied","Data":"f5890970047ce63a42dc79cfa89fafeb042f484b41acd3992b3fbd6a41ffcdc9"} Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.101858 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerStarted","Data":"fe89afb57c31525aa0a91b6d9d94b1deb9c75b295ce6b9ca35b75f60d8d97531"} Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.105810 4798 generic.go:334] "Generic (PLEG): container finished" 
podID="05b13579-cc4d-48e7-87f2-814f2adf2fdd" containerID="5810b9e11435e6792eb8ee35092e82a9ac67bd1a3143449ae6cc552b5885c90b" exitCode=0 Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.105988 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8bgg" event={"ID":"05b13579-cc4d-48e7-87f2-814f2adf2fdd","Type":"ContainerDied","Data":"5810b9e11435e6792eb8ee35092e82a9ac67bd1a3143449ae6cc552b5885c90b"} Feb 03 00:21:17 crc kubenswrapper[4798]: I0203 00:21:17.107756 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8bgg" event={"ID":"05b13579-cc4d-48e7-87f2-814f2adf2fdd","Type":"ContainerStarted","Data":"7b9f8058d5c4aa687b1d38f35078b1051b3740cf008d413996493e2eb0977f68"} Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.206351 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-85k55"] Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.207786 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.211611 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.223292 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85k55"] Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.349241 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-utilities\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.349329 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj49v\" (UniqueName: \"kubernetes.io/projected/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-kube-api-access-bj49v\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.349376 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-catalog-content\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.414135 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhwdd"] Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.418738 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.422721 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhwdd"] Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.425899 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.450519 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-utilities\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.450586 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj49v\" (UniqueName: \"kubernetes.io/projected/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-kube-api-access-bj49v\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.450624 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-catalog-content\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.451138 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-catalog-content\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " 
pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.452890 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-utilities\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.473226 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj49v\" (UniqueName: \"kubernetes.io/projected/35538c6f-6fc5-4f9b-ab5c-88d343643ccd-kube-api-access-bj49v\") pod \"certified-operators-85k55\" (UID: \"35538c6f-6fc5-4f9b-ab5c-88d343643ccd\") " pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.525820 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.552129 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkt77\" (UniqueName: \"kubernetes.io/projected/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-kube-api-access-rkt77\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.552272 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-catalog-content\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.552390 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-utilities\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.653706 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkt77\" (UniqueName: \"kubernetes.io/projected/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-kube-api-access-rkt77\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.653767 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-catalog-content\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.653811 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-utilities\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.654328 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-utilities\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.654347 4798 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-catalog-content\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.671146 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkt77\" (UniqueName: \"kubernetes.io/projected/7d03cb63-eab3-4c46-b6fa-ef7130ac7e72-kube-api-access-rkt77\") pod \"community-operators-jhwdd\" (UID: \"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72\") " pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:18 crc kubenswrapper[4798]: I0203 00:21:18.734231 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.236267 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r7dmz" Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.296373 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.450600 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.451008 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" podUID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" containerName="route-controller-manager" containerID="cri-o://de76936386d38f821cae5939b725070c52c4ef99efde7d334a30c0b5bd4d98e4" gracePeriod=30 Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.674366 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-jhwdd"] Feb 03 00:21:19 crc kubenswrapper[4798]: W0203 00:21:19.679512 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d03cb63_eab3_4c46_b6fa_ef7130ac7e72.slice/crio-bdfc5808462e3e4344c115c6ea24ae84ed797a6a48dd8c3e19012eda521cf24a WatchSource:0}: Error finding container bdfc5808462e3e4344c115c6ea24ae84ed797a6a48dd8c3e19012eda521cf24a: Status 404 returned error can't find the container with id bdfc5808462e3e4344c115c6ea24ae84ed797a6a48dd8c3e19012eda521cf24a Feb 03 00:21:19 crc kubenswrapper[4798]: I0203 00:21:19.800774 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85k55"] Feb 03 00:21:19 crc kubenswrapper[4798]: W0203 00:21:19.805821 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35538c6f_6fc5_4f9b_ab5c_88d343643ccd.slice/crio-53a40addc949a6495b5b97797e93f961f110b1ce7211dbf8b95ee05f07c9f62a WatchSource:0}: Error finding container 53a40addc949a6495b5b97797e93f961f110b1ce7211dbf8b95ee05f07c9f62a: Status 404 returned error can't find the container with id 53a40addc949a6495b5b97797e93f961f110b1ce7211dbf8b95ee05f07c9f62a Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.123531 4798 generic.go:334] "Generic (PLEG): container finished" podID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" containerID="de76936386d38f821cae5939b725070c52c4ef99efde7d334a30c0b5bd4d98e4" exitCode=0 Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.123588 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" event={"ID":"32508b48-f159-4a5a-898d-3d5ad26ac7c9","Type":"ContainerDied","Data":"de76936386d38f821cae5939b725070c52c4ef99efde7d334a30c0b5bd4d98e4"} Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.125341 4798 generic.go:334] 
"Generic (PLEG): container finished" podID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerID="0ab6458df6d5b64bb4af526e242d00435955e991835dd873ebf67432185a57a9" exitCode=0 Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.125377 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerDied","Data":"0ab6458df6d5b64bb4af526e242d00435955e991835dd873ebf67432185a57a9"} Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.126261 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85k55" event={"ID":"35538c6f-6fc5-4f9b-ab5c-88d343643ccd","Type":"ContainerStarted","Data":"53a40addc949a6495b5b97797e93f961f110b1ce7211dbf8b95ee05f07c9f62a"} Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.127000 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhwdd" event={"ID":"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72","Type":"ContainerStarted","Data":"bdfc5808462e3e4344c115c6ea24ae84ed797a6a48dd8c3e19012eda521cf24a"} Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.128247 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8bgg" event={"ID":"05b13579-cc4d-48e7-87f2-814f2adf2fdd","Type":"ContainerStarted","Data":"af60b79692cec94659d150a747bcb3adf4a72ca22d2bf7e66dbe9d28f652fe1c"} Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.651084 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.684530 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27"] Feb 03 00:21:20 crc kubenswrapper[4798]: E0203 00:21:20.684809 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" containerName="route-controller-manager" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.684825 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" containerName="route-controller-manager" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.684951 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" containerName="route-controller-manager" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.685377 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.695917 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27"] Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.777744 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config\") pod \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.777827 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert\") pod \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.777856 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca\") pod \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.777934 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ghk\" (UniqueName: \"kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk\") pod \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\" (UID: \"32508b48-f159-4a5a-898d-3d5ad26ac7c9\") " Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.778097 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzf6t\" (UniqueName: \"kubernetes.io/projected/18983714-5d3a-47a2-bcf2-8425d01f436c-kube-api-access-kzf6t\") 
pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.778131 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-client-ca\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.778169 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-config\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.778325 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18983714-5d3a-47a2-bcf2-8425d01f436c-serving-cert\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.778683 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "32508b48-f159-4a5a-898d-3d5ad26ac7c9" (UID: "32508b48-f159-4a5a-898d-3d5ad26ac7c9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.779181 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config" (OuterVolumeSpecName: "config") pod "32508b48-f159-4a5a-898d-3d5ad26ac7c9" (UID: "32508b48-f159-4a5a-898d-3d5ad26ac7c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.799266 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32508b48-f159-4a5a-898d-3d5ad26ac7c9" (UID: "32508b48-f159-4a5a-898d-3d5ad26ac7c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.799302 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk" (OuterVolumeSpecName: "kube-api-access-22ghk") pod "32508b48-f159-4a5a-898d-3d5ad26ac7c9" (UID: "32508b48-f159-4a5a-898d-3d5ad26ac7c9"). InnerVolumeSpecName "kube-api-access-22ghk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.879982 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18983714-5d3a-47a2-bcf2-8425d01f436c-serving-cert\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.880068 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzf6t\" (UniqueName: \"kubernetes.io/projected/18983714-5d3a-47a2-bcf2-8425d01f436c-kube-api-access-kzf6t\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.880098 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-client-ca\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.880133 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-config\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.881324 4798 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.881387 4798 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32508b48-f159-4a5a-898d-3d5ad26ac7c9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.881402 4798 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32508b48-f159-4a5a-898d-3d5ad26ac7c9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.881418 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ghk\" (UniqueName: \"kubernetes.io/projected/32508b48-f159-4a5a-898d-3d5ad26ac7c9-kube-api-access-22ghk\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.881892 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-config\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.883674 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18983714-5d3a-47a2-bcf2-8425d01f436c-serving-cert\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.884816 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18983714-5d3a-47a2-bcf2-8425d01f436c-client-ca\") pod 
\"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:20 crc kubenswrapper[4798]: I0203 00:21:20.896116 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzf6t\" (UniqueName: \"kubernetes.io/projected/18983714-5d3a-47a2-bcf2-8425d01f436c-kube-api-access-kzf6t\") pod \"route-controller-manager-7c69f646fc-gth27\" (UID: \"18983714-5d3a-47a2-bcf2-8425d01f436c\") " pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.004458 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.140932 4798 generic.go:334] "Generic (PLEG): container finished" podID="35538c6f-6fc5-4f9b-ab5c-88d343643ccd" containerID="6e12d43ea8f7df8c541399adc779b4fb100081fdc8fd2634bdc8993f31618339" exitCode=0 Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.141200 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85k55" event={"ID":"35538c6f-6fc5-4f9b-ab5c-88d343643ccd","Type":"ContainerDied","Data":"6e12d43ea8f7df8c541399adc779b4fb100081fdc8fd2634bdc8993f31618339"} Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.145722 4798 generic.go:334] "Generic (PLEG): container finished" podID="7d03cb63-eab3-4c46-b6fa-ef7130ac7e72" containerID="530795dfe2e3f82123534448aba985283ca2afeb64151e5be0d8b72c4cabd07b" exitCode=0 Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.145779 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhwdd" 
event={"ID":"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72","Type":"ContainerDied","Data":"530795dfe2e3f82123534448aba985283ca2afeb64151e5be0d8b72c4cabd07b"} Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.150795 4798 generic.go:334] "Generic (PLEG): container finished" podID="05b13579-cc4d-48e7-87f2-814f2adf2fdd" containerID="af60b79692cec94659d150a747bcb3adf4a72ca22d2bf7e66dbe9d28f652fe1c" exitCode=0 Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.150868 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8bgg" event={"ID":"05b13579-cc4d-48e7-87f2-814f2adf2fdd","Type":"ContainerDied","Data":"af60b79692cec94659d150a747bcb3adf4a72ca22d2bf7e66dbe9d28f652fe1c"} Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.165435 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.165991 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8" event={"ID":"32508b48-f159-4a5a-898d-3d5ad26ac7c9","Type":"ContainerDied","Data":"0d97a43be6e7d9f23c133c4606ded941dd10e314bc70881aa4dd3cd71cd2cc0d"} Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.166073 4798 scope.go:117] "RemoveContainer" containerID="de76936386d38f821cae5939b725070c52c4ef99efde7d334a30c0b5bd4d98e4" Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.249548 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.256444 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b5775f89-psvr8"] Feb 03 00:21:21 crc kubenswrapper[4798]: I0203 00:21:21.261203 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27"] Feb 03 00:21:21 crc kubenswrapper[4798]: W0203 00:21:21.272046 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18983714_5d3a_47a2_bcf2_8425d01f436c.slice/crio-7b0a7b4ea0e34ce3f556e97662d71847ae8ef6de29c9809d269ff16be4d0fe57 WatchSource:0}: Error finding container 7b0a7b4ea0e34ce3f556e97662d71847ae8ef6de29c9809d269ff16be4d0fe57: Status 404 returned error can't find the container with id 7b0a7b4ea0e34ce3f556e97662d71847ae8ef6de29c9809d269ff16be4d0fe57 Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.172090 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerStarted","Data":"46d8d467a0f228599115e6b4ff105404c9aaaea39bf252a59b89dde5032359f6"} Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.173396 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" event={"ID":"18983714-5d3a-47a2-bcf2-8425d01f436c","Type":"ContainerStarted","Data":"9e7bd26a699ff08b45de22ee54efaced1a835e6b79f2c26ac425e2ff2fc207c7"} Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.173420 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" event={"ID":"18983714-5d3a-47a2-bcf2-8425d01f436c","Type":"ContainerStarted","Data":"7b0a7b4ea0e34ce3f556e97662d71847ae8ef6de29c9809d269ff16be4d0fe57"} Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.173682 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.180339 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.197216 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdvvb" podStartSLOduration=2.338696753 podStartE2EDuration="7.197196774s" podCreationTimestamp="2026-02-03 00:21:15 +0000 UTC" firstStartedPulling="2026-02-03 00:21:17.104106083 +0000 UTC m=+368.870096094" lastFinishedPulling="2026-02-03 00:21:21.962606104 +0000 UTC m=+373.728596115" observedRunningTime="2026-02-03 00:21:22.192799628 +0000 UTC m=+373.958789639" watchObservedRunningTime="2026-02-03 00:21:22.197196774 +0000 UTC m=+373.963186775" Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.213531 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c69f646fc-gth27" podStartSLOduration=3.213511364 podStartE2EDuration="3.213511364s" podCreationTimestamp="2026-02-03 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:21:22.210770335 +0000 UTC m=+373.976760336" watchObservedRunningTime="2026-02-03 00:21:22.213511364 +0000 UTC m=+373.979501375" Feb 03 00:21:22 crc kubenswrapper[4798]: I0203 00:21:22.925096 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32508b48-f159-4a5a-898d-3d5ad26ac7c9" path="/var/lib/kubelet/pods/32508b48-f159-4a5a-898d-3d5ad26ac7c9/volumes" Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.187245 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8bgg" event={"ID":"05b13579-cc4d-48e7-87f2-814f2adf2fdd","Type":"ContainerStarted","Data":"b02c16108dbafb75c46939ab59562eefe302b3d4c68e904c6623142ac7517449"} Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.189686 4798 generic.go:334] "Generic 
(PLEG): container finished" podID="35538c6f-6fc5-4f9b-ab5c-88d343643ccd" containerID="e4cb9c1d9aa1dfe9bcf77639929392a938b98792692cc47861415bb5d554f139" exitCode=0 Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.189742 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85k55" event={"ID":"35538c6f-6fc5-4f9b-ab5c-88d343643ccd","Type":"ContainerDied","Data":"e4cb9c1d9aa1dfe9bcf77639929392a938b98792692cc47861415bb5d554f139"} Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.194380 4798 generic.go:334] "Generic (PLEG): container finished" podID="7d03cb63-eab3-4c46-b6fa-ef7130ac7e72" containerID="0958b7a60859288b2543d0cd7d8184f5642d4be737c334de01174ed4d1659519" exitCode=0 Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.194450 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhwdd" event={"ID":"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72","Type":"ContainerDied","Data":"0958b7a60859288b2543d0cd7d8184f5642d4be737c334de01174ed4d1659519"} Feb 03 00:21:24 crc kubenswrapper[4798]: I0203 00:21:24.213562 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8bgg" podStartSLOduration=2.874812541 podStartE2EDuration="9.213545945s" podCreationTimestamp="2026-02-03 00:21:15 +0000 UTC" firstStartedPulling="2026-02-03 00:21:17.10816867 +0000 UTC m=+368.874158681" lastFinishedPulling="2026-02-03 00:21:23.446902074 +0000 UTC m=+375.212892085" observedRunningTime="2026-02-03 00:21:24.210903519 +0000 UTC m=+375.976893530" watchObservedRunningTime="2026-02-03 00:21:24.213545945 +0000 UTC m=+375.979535956" Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.201873 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhwdd" 
event={"ID":"7d03cb63-eab3-4c46-b6fa-ef7130ac7e72","Type":"ContainerStarted","Data":"dbd1ddc932735ba15dc2b002b138520058802ec927a9c85c1164ab95c657e4a4"} Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.204691 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85k55" event={"ID":"35538c6f-6fc5-4f9b-ab5c-88d343643ccd","Type":"ContainerStarted","Data":"d8bfe986558f0bc1896e6351c16a8d1cf84c510daa42542713103770f07942e0"} Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.221413 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jhwdd" podStartSLOduration=3.367042487 podStartE2EDuration="7.221392844s" podCreationTimestamp="2026-02-03 00:21:18 +0000 UTC" firstStartedPulling="2026-02-03 00:21:21.147092558 +0000 UTC m=+372.913082569" lastFinishedPulling="2026-02-03 00:21:25.001442915 +0000 UTC m=+376.767432926" observedRunningTime="2026-02-03 00:21:25.218407458 +0000 UTC m=+376.984397479" watchObservedRunningTime="2026-02-03 00:21:25.221392844 +0000 UTC m=+376.987382855" Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.241451 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85k55" podStartSLOduration=3.6783989569999997 podStartE2EDuration="7.241433162s" podCreationTimestamp="2026-02-03 00:21:18 +0000 UTC" firstStartedPulling="2026-02-03 00:21:21.142686411 +0000 UTC m=+372.908676422" lastFinishedPulling="2026-02-03 00:21:24.705720616 +0000 UTC m=+376.471710627" observedRunningTime="2026-02-03 00:21:25.234871073 +0000 UTC m=+377.000861084" watchObservedRunningTime="2026-02-03 00:21:25.241433162 +0000 UTC m=+377.007423173" Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.931732 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.932026 4798 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:25 crc kubenswrapper[4798]: I0203 00:21:25.975781 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:26 crc kubenswrapper[4798]: I0203 00:21:26.128701 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:26 crc kubenswrapper[4798]: I0203 00:21:26.128765 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:27 crc kubenswrapper[4798]: I0203 00:21:27.174466 4798 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8bgg" podUID="05b13579-cc4d-48e7-87f2-814f2adf2fdd" containerName="registry-server" probeResult="failure" output=< Feb 03 00:21:27 crc kubenswrapper[4798]: timeout: failed to connect service ":50051" within 1s Feb 03 00:21:27 crc kubenswrapper[4798]: > Feb 03 00:21:27 crc kubenswrapper[4798]: I0203 00:21:27.253409 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.526294 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.526374 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.564880 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.735243 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.735302 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:28 crc kubenswrapper[4798]: I0203 00:21:28.773534 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:29 crc kubenswrapper[4798]: I0203 00:21:29.266414 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85k55" Feb 03 00:21:36 crc kubenswrapper[4798]: I0203 00:21:36.174009 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:36 crc kubenswrapper[4798]: I0203 00:21:36.259897 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8bgg" Feb 03 00:21:38 crc kubenswrapper[4798]: I0203 00:21:38.767970 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jhwdd" Feb 03 00:21:43 crc kubenswrapper[4798]: I0203 00:21:43.866696 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:21:43 crc kubenswrapper[4798]: I0203 00:21:43.867069 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 
00:21:44.344228 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" podUID="691167fd-4218-4be3-bd41-39486e614ab4" containerName="registry" containerID="cri-o://3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4" gracePeriod=30 Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.729058 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817191 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817239 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817265 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817310 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc 
kubenswrapper[4798]: I0203 00:21:44.817335 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817393 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817430 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.817471 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted\") pod \"691167fd-4218-4be3-bd41-39486e614ab4\" (UID: \"691167fd-4218-4be3-bd41-39486e614ab4\") " Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.819871 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.820165 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.823612 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls" (OuterVolumeSpecName: "kube-api-access-dzcls") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "kube-api-access-dzcls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.824632 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.828128 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.829800 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.831376 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.834084 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "691167fd-4218-4be3-bd41-39486e614ab4" (UID: "691167fd-4218-4be3-bd41-39486e614ab4"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919690 4798 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919724 4798 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/691167fd-4218-4be3-bd41-39486e614ab4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919736 4798 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919745 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcls\" (UniqueName: \"kubernetes.io/projected/691167fd-4218-4be3-bd41-39486e614ab4-kube-api-access-dzcls\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919753 4798 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/691167fd-4218-4be3-bd41-39486e614ab4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919765 4798 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:44 crc kubenswrapper[4798]: I0203 00:21:44.919775 4798 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691167fd-4218-4be3-bd41-39486e614ab4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 00:21:45 crc 
kubenswrapper[4798]: I0203 00:21:45.313314 4798 generic.go:334] "Generic (PLEG): container finished" podID="691167fd-4218-4be3-bd41-39486e614ab4" containerID="3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4" exitCode=0 Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.313364 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.313363 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" event={"ID":"691167fd-4218-4be3-bd41-39486e614ab4","Type":"ContainerDied","Data":"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4"} Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.313408 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxw86" event={"ID":"691167fd-4218-4be3-bd41-39486e614ab4","Type":"ContainerDied","Data":"3829c6cf8926f3f13edb1e121be0e242a84b742983f5ec9ef8d3033a0e34ec28"} Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.313426 4798 scope.go:117] "RemoveContainer" containerID="3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4" Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.329870 4798 scope.go:117] "RemoveContainer" containerID="3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4" Feb 03 00:21:45 crc kubenswrapper[4798]: E0203 00:21:45.330437 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4\": container with ID starting with 3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4 not found: ID does not exist" containerID="3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4" Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.330476 4798 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4"} err="failed to get container status \"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4\": rpc error: code = NotFound desc = could not find container \"3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4\": container with ID starting with 3c092d178cf7e903597b862893bacdeae6c8534888c67e938bb262c70dfbf8e4 not found: ID does not exist" Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.332839 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:21:45 crc kubenswrapper[4798]: I0203 00:21:45.342446 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxw86"] Feb 03 00:21:46 crc kubenswrapper[4798]: I0203 00:21:46.915749 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691167fd-4218-4be3-bd41-39486e614ab4" path="/var/lib/kubelet/pods/691167fd-4218-4be3-bd41-39486e614ab4/volumes" Feb 03 00:22:13 crc kubenswrapper[4798]: I0203 00:22:13.866608 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:22:13 crc kubenswrapper[4798]: I0203 00:22:13.867108 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:22:13 crc kubenswrapper[4798]: I0203 00:22:13.867154 4798 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:22:13 crc kubenswrapper[4798]: I0203 00:22:13.867794 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 00:22:13 crc kubenswrapper[4798]: I0203 00:22:13.867869 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9" gracePeriod=600 Feb 03 00:22:14 crc kubenswrapper[4798]: I0203 00:22:14.488670 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9" exitCode=0 Feb 03 00:22:14 crc kubenswrapper[4798]: I0203 00:22:14.488686 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9"} Feb 03 00:22:14 crc kubenswrapper[4798]: I0203 00:22:14.488743 4798 scope.go:117] "RemoveContainer" containerID="13a062edfbf637aa7eed0473b03b10011ed3e0b8647ca701c9b67bd16abe6fd4" Feb 03 00:22:15 crc kubenswrapper[4798]: I0203 00:22:15.501520 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" 
event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb"} Feb 03 00:24:43 crc kubenswrapper[4798]: I0203 00:24:43.866859 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:24:43 crc kubenswrapper[4798]: I0203 00:24:43.867383 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:25:13 crc kubenswrapper[4798]: I0203 00:25:13.867550 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:25:13 crc kubenswrapper[4798]: I0203 00:25:13.868440 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.061579 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gzlj4"] Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.062999 4798 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-controller" containerID="cri-o://ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063083 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="nbdb" containerID="cri-o://f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063209 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="northd" containerID="cri-o://b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063272 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-node" containerID="cri-o://f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063295 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-acl-logging" containerID="cri-o://993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063156 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="sbdb" 
containerID="cri-o://5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.063299 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.167352 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" containerID="cri-o://12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" gracePeriod=30 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.244285 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/2.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.244709 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/1.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.244754 4798 generic.go:334] "Generic (PLEG): container finished" podID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" containerID="3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415" exitCode=2 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.244826 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerDied","Data":"3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415"} Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.244867 4798 scope.go:117] "RemoveContainer" 
containerID="f6c6b9ea2222959f244be8e386750c0d7bdd1a2f340aca554cc8b990c3907ba7" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.245461 4798 scope.go:117] "RemoveContainer" containerID="3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.245642 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ktf4c_openshift-multus(106da5aa-5f2e-4d32-b172-4844ad6de7f6)\"" pod="openshift-multus/multus-ktf4c" podUID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.247006 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.248705 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-acl-logging/0.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249073 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-controller/0.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249354 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" exitCode=0 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249375 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" exitCode=0 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249382 4798 generic.go:334] "Generic (PLEG): container finished" 
podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" exitCode=143 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249388 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" exitCode=143 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249406 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249429 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249439 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.249448 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.475784 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.478116 4798 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-acl-logging/0.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.478539 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-controller/0.log" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.479014 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.546056 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kx2zk"] Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.547219 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="northd" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.547336 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="northd" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.547447 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691167fd-4218-4be3-bd41-39486e614ab4" containerName="registry" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.547539 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="691167fd-4218-4be3-bd41-39486e614ab4" containerName="registry" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.547820 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.547932 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.548023 4798 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.548108 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.548203 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="sbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.548305 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="sbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.548425 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-node" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.548526 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-node" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.548617 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-acl-logging" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.548756 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-acl-logging" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.548856 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.548947 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.549047 4798 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.549329 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.549425 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kubecfg-setup" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.549514 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kubecfg-setup" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.549610 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="nbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.549786 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="nbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.549965 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.550094 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.550531 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.551977 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552166 4798 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="northd" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552223 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="kube-rbac-proxy-node" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552278 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="691167fd-4218-4be3-bd41-39486e614ab4" containerName="registry" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552330 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552412 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="nbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552475 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="sbdb" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552531 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552618 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovn-acl-logging" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.552710 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.553047 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.553130 4798 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: E0203 00:25:43.553191 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.553240 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.553514 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.553602 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71790a2-e390-400a-a288-2a3af8467047" containerName="ovnkube-controller" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.555337 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.581850 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582226 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582258 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582280 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582302 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582323 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582361 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582383 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582411 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582435 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582444 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582465 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582497 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582513 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582631 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582541 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log" (OuterVolumeSpecName: "node-log") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582550 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582575 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582577 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket" (OuterVolumeSpecName: "log-socket") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582527 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582738 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582778 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582797 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxchl\" (UniqueName: \"kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582819 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582854 4798 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582869 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582886 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582902 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch\") pod \"b71790a2-e390-400a-a288-2a3af8467047\" (UID: \"b71790a2-e390-400a-a288-2a3af8467047\") " Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.582919 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583209 4798 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583221 4798 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583230 4798 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583239 4798 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583247 4798 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583256 4798 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583255 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583264 4798 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583281 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583290 4798 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583299 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583304 4798 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583315 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583331 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash" (OuterVolumeSpecName: "host-slash") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583337 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583356 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.583360 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.588801 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.589615 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl" (OuterVolumeSpecName: "kube-api-access-lxchl") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "kube-api-access-lxchl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.596062 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b71790a2-e390-400a-a288-2a3af8467047" (UID: "b71790a2-e390-400a-a288-2a3af8467047"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.684642 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-script-lib\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.684716 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-netd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.684770 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-slash\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.684793 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-systemd-units\") pod \"ovnkube-node-kx2zk\" 
(UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.684883 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685001 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-systemd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685091 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-bin\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685119 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6k9\" (UniqueName: \"kubernetes.io/projected/5621abfc-74ab-4b61-a1c4-c90456171d20-kube-api-access-4b6k9\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685162 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-ovn\") pod 
\"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685200 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685250 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-var-lib-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685329 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-node-log\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685362 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685386 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-config\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685411 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-netns\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685436 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-log-socket\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685461 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-kubelet\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685486 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-env-overrides\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685506 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-etc-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685535 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5621abfc-74ab-4b61-a1c4-c90456171d20-ovn-node-metrics-cert\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685581 4798 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685596 4798 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685607 4798 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685619 4798 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685631 4798 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71790a2-e390-400a-a288-2a3af8467047-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 
00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685644 4798 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685685 4798 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685698 4798 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685710 4798 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b71790a2-e390-400a-a288-2a3af8467047-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685721 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxchl\" (UniqueName: \"kubernetes.io/projected/b71790a2-e390-400a-a288-2a3af8467047-kube-api-access-lxchl\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.685732 4798 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b71790a2-e390-400a-a288-2a3af8467047-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.786803 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-var-lib-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: 
\"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.786909 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-node-log\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.786928 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-var-lib-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.786950 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787009 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787026 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-config\") pod \"ovnkube-node-kx2zk\" 
(UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787091 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-netns\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787146 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-log-socket\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787163 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-netns\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787205 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-kubelet\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787254 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-kubelet\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 
00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787080 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-node-log\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787297 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-log-socket\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787307 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-env-overrides\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787423 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-etc-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787468 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5621abfc-74ab-4b61-a1c4-c90456171d20-ovn-node-metrics-cert\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787501 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-script-lib\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787504 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-etc-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787521 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-netd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787562 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-slash\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787579 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-systemd-units\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787595 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787620 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-systemd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787644 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-bin\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787685 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6k9\" (UniqueName: \"kubernetes.io/projected/5621abfc-74ab-4b61-a1c4-c90456171d20-kube-api-access-4b6k9\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787684 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-slash\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787717 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-ovn\") pod 
\"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787736 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787753 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-systemd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787787 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-netd\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787828 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-run-ovn-kubernetes\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787809 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-openvswitch\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787817 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-config\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787811 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-run-ovn\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787843 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-host-cni-bin\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.787888 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5621abfc-74ab-4b61-a1c4-c90456171d20-systemd-units\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.788922 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-ovnkube-script-lib\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.789181 4798 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5621abfc-74ab-4b61-a1c4-c90456171d20-env-overrides\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.796020 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5621abfc-74ab-4b61-a1c4-c90456171d20-ovn-node-metrics-cert\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.809096 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6k9\" (UniqueName: \"kubernetes.io/projected/5621abfc-74ab-4b61-a1c4-c90456171d20-kube-api-access-4b6k9\") pod \"ovnkube-node-kx2zk\" (UID: \"5621abfc-74ab-4b61-a1c4-c90456171d20\") " pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.867050 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.867148 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.867211 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.868091 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.868198 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb" gracePeriod=600 Feb 03 00:25:43 crc kubenswrapper[4798]: I0203 00:25:43.868951 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:43 crc kubenswrapper[4798]: W0203 00:25:43.902440 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5621abfc_74ab_4b61_a1c4_c90456171d20.slice/crio-41736f026360e0bd4d18da2208d955a74d942c5204010cc8035baf2a7bfb293c WatchSource:0}: Error finding container 41736f026360e0bd4d18da2208d955a74d942c5204010cc8035baf2a7bfb293c: Status 404 returned error can't find the container with id 41736f026360e0bd4d18da2208d955a74d942c5204010cc8035baf2a7bfb293c Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.254856 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/2.log" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.257739 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb" exitCode=0 Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.257823 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.257873 4798 scope.go:117] "RemoveContainer" containerID="e0abb4fba18cc39f2fad2bd7c8dc346fe38eaaf9c13b0911c25b8caac9f640e9" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.259092 4798 generic.go:334] "Generic (PLEG): container finished" podID="5621abfc-74ab-4b61-a1c4-c90456171d20" containerID="46ce377bc936c8978cc8d25652b7958e44c8aa72bd932afc5155ff351819ce06" exitCode=0 Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.259132 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerDied","Data":"46ce377bc936c8978cc8d25652b7958e44c8aa72bd932afc5155ff351819ce06"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.259185 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"41736f026360e0bd4d18da2208d955a74d942c5204010cc8035baf2a7bfb293c"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.265832 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovnkube-controller/3.log" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.267632 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-acl-logging/0.log" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268018 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzlj4_b71790a2-e390-400a-a288-2a3af8467047/ovn-controller/0.log" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268264 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" exitCode=0 Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268303 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" exitCode=0 Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268311 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" exitCode=0 Feb 03 00:25:44 
crc kubenswrapper[4798]: I0203 00:25:44.268318 4798 generic.go:334] "Generic (PLEG): container finished" podID="b71790a2-e390-400a-a288-2a3af8467047" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" exitCode=0 Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268316 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268365 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.268371 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.269685 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.269707 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.269723 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzlj4" 
event={"ID":"b71790a2-e390-400a-a288-2a3af8467047","Type":"ContainerDied","Data":"d23292102a8b196adcc307379dafc7b34e895ac7da120126e2860071d4026509"} Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.284059 4798 scope.go:117] "RemoveContainer" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.303523 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.333797 4798 scope.go:117] "RemoveContainer" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.370277 4798 scope.go:117] "RemoveContainer" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.370887 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gzlj4"] Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.377567 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gzlj4"] Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.387183 4798 scope.go:117] "RemoveContainer" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.412716 4798 scope.go:117] "RemoveContainer" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.432921 4798 scope.go:117] "RemoveContainer" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.450479 4798 scope.go:117] "RemoveContainer" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.474997 4798 
scope.go:117] "RemoveContainer" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.487512 4798 scope.go:117] "RemoveContainer" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.513206 4798 scope.go:117] "RemoveContainer" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.513731 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": container with ID starting with 12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018 not found: ID does not exist" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.513763 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} err="failed to get container status \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": rpc error: code = NotFound desc = could not find container \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": container with ID starting with 12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.513783 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.514071 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": container with ID starting with 
543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102 not found: ID does not exist" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514091 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} err="failed to get container status \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": rpc error: code = NotFound desc = could not find container \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": container with ID starting with 543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514105 4798 scope.go:117] "RemoveContainer" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.514317 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": container with ID starting with 5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019 not found: ID does not exist" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514337 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} err="failed to get container status \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": rpc error: code = NotFound desc = could not find container \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": container with ID starting with 5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019 not found: ID does not 
exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514350 4798 scope.go:117] "RemoveContainer" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.514677 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": container with ID starting with f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848 not found: ID does not exist" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514770 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} err="failed to get container status \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": rpc error: code = NotFound desc = could not find container \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": container with ID starting with f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.514805 4798 scope.go:117] "RemoveContainer" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.515379 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": container with ID starting with b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8 not found: ID does not exist" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.515410 4798 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} err="failed to get container status \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": rpc error: code = NotFound desc = could not find container \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": container with ID starting with b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.515431 4798 scope.go:117] "RemoveContainer" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.515716 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": container with ID starting with 4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34 not found: ID does not exist" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.515744 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} err="failed to get container status \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": rpc error: code = NotFound desc = could not find container \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": container with ID starting with 4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.515760 4798 scope.go:117] "RemoveContainer" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.516191 4798 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": container with ID starting with f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd not found: ID does not exist" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516219 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} err="failed to get container status \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": rpc error: code = NotFound desc = could not find container \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": container with ID starting with f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516237 4798 scope.go:117] "RemoveContainer" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.516476 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": container with ID starting with 993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3 not found: ID does not exist" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516512 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} err="failed to get container status \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": rpc error: code = NotFound desc = could 
not find container \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": container with ID starting with 993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516535 4798 scope.go:117] "RemoveContainer" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.516761 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": container with ID starting with ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5 not found: ID does not exist" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516795 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} err="failed to get container status \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": rpc error: code = NotFound desc = could not find container \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": container with ID starting with ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.516814 4798 scope.go:117] "RemoveContainer" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: E0203 00:25:44.517230 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": container with ID starting with ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e not found: 
ID does not exist" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517262 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e"} err="failed to get container status \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": rpc error: code = NotFound desc = could not find container \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": container with ID starting with ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517282 4798 scope.go:117] "RemoveContainer" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517565 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} err="failed to get container status \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": rpc error: code = NotFound desc = could not find container \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": container with ID starting with 12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517590 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517913 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} err="failed to get container status \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": rpc error: code = 
NotFound desc = could not find container \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": container with ID starting with 543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.517938 4798 scope.go:117] "RemoveContainer" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.518480 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} err="failed to get container status \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": rpc error: code = NotFound desc = could not find container \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": container with ID starting with 5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.518509 4798 scope.go:117] "RemoveContainer" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.518762 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} err="failed to get container status \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": rpc error: code = NotFound desc = could not find container \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": container with ID starting with f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.518788 4798 scope.go:117] "RemoveContainer" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc 
kubenswrapper[4798]: I0203 00:25:44.518994 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} err="failed to get container status \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": rpc error: code = NotFound desc = could not find container \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": container with ID starting with b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519018 4798 scope.go:117] "RemoveContainer" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519303 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} err="failed to get container status \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": rpc error: code = NotFound desc = could not find container \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": container with ID starting with 4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519329 4798 scope.go:117] "RemoveContainer" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519580 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} err="failed to get container status \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": rpc error: code = NotFound desc = could not find container \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": container 
with ID starting with f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519609 4798 scope.go:117] "RemoveContainer" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519916 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} err="failed to get container status \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": rpc error: code = NotFound desc = could not find container \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": container with ID starting with 993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.519941 4798 scope.go:117] "RemoveContainer" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520158 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} err="failed to get container status \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": rpc error: code = NotFound desc = could not find container \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": container with ID starting with ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520176 4798 scope.go:117] "RemoveContainer" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520461 4798 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e"} err="failed to get container status \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": rpc error: code = NotFound desc = could not find container \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": container with ID starting with ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520488 4798 scope.go:117] "RemoveContainer" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520708 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} err="failed to get container status \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": rpc error: code = NotFound desc = could not find container \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": container with ID starting with 12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520730 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520962 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} err="failed to get container status \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": rpc error: code = NotFound desc = could not find container \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": container with ID starting with 543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102 not found: ID does not 
exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.520982 4798 scope.go:117] "RemoveContainer" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521271 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} err="failed to get container status \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": rpc error: code = NotFound desc = could not find container \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": container with ID starting with 5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521298 4798 scope.go:117] "RemoveContainer" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521493 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} err="failed to get container status \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": rpc error: code = NotFound desc = could not find container \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": container with ID starting with f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521512 4798 scope.go:117] "RemoveContainer" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521738 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} err="failed to get container status 
\"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": rpc error: code = NotFound desc = could not find container \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": container with ID starting with b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521761 4798 scope.go:117] "RemoveContainer" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521954 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} err="failed to get container status \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": rpc error: code = NotFound desc = could not find container \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": container with ID starting with 4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.521976 4798 scope.go:117] "RemoveContainer" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522148 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} err="failed to get container status \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": rpc error: code = NotFound desc = could not find container \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": container with ID starting with f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522169 4798 scope.go:117] "RemoveContainer" 
containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522348 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} err="failed to get container status \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": rpc error: code = NotFound desc = could not find container \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": container with ID starting with 993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522369 4798 scope.go:117] "RemoveContainer" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522778 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} err="failed to get container status \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": rpc error: code = NotFound desc = could not find container \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": container with ID starting with ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.522803 4798 scope.go:117] "RemoveContainer" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523083 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e"} err="failed to get container status \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": rpc error: code = NotFound desc = could 
not find container \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": container with ID starting with ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523128 4798 scope.go:117] "RemoveContainer" containerID="12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523393 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018"} err="failed to get container status \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": rpc error: code = NotFound desc = could not find container \"12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018\": container with ID starting with 12ce9d144d65989d5f9710264f741702fa932ce66fb41cc0b22042ea7ed76018 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523413 4798 scope.go:117] "RemoveContainer" containerID="543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523627 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102"} err="failed to get container status \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": rpc error: code = NotFound desc = could not find container \"543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102\": container with ID starting with 543ce215587d0745d59fb1c3a87d4fd86d500257ac93b2b61d7cec602dd61102 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523674 4798 scope.go:117] "RemoveContainer" containerID="5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 
00:25:44.523964 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019"} err="failed to get container status \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": rpc error: code = NotFound desc = could not find container \"5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019\": container with ID starting with 5ee06775598e60de2da34351058dff6eaab6dc6bc414eb1c77ba0e8be3416019 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.523984 4798 scope.go:117] "RemoveContainer" containerID="f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524182 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848"} err="failed to get container status \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": rpc error: code = NotFound desc = could not find container \"f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848\": container with ID starting with f83c3f7e3158aeec29b8c4edaa7894bd43a1763d113e068b25684084de7a1848 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524201 4798 scope.go:117] "RemoveContainer" containerID="b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524389 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8"} err="failed to get container status \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": rpc error: code = NotFound desc = could not find container \"b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8\": container with ID starting with 
b8a6a6072e84436d05367c4338db0ce8b3bacc67d4b912679e849bb882d120d8 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524409 4798 scope.go:117] "RemoveContainer" containerID="4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524790 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34"} err="failed to get container status \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": rpc error: code = NotFound desc = could not find container \"4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34\": container with ID starting with 4ba0102c263548b70714e90db7d0c70ef80e0a6df78392e43e962a6858442f34 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.524810 4798 scope.go:117] "RemoveContainer" containerID="f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.525036 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd"} err="failed to get container status \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": rpc error: code = NotFound desc = could not find container \"f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd\": container with ID starting with f0a053da305034aca35febde83f9dba824750d4f0e28d43680844df7d188d1cd not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.525053 4798 scope.go:117] "RemoveContainer" containerID="993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.525333 4798 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3"} err="failed to get container status \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": rpc error: code = NotFound desc = could not find container \"993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3\": container with ID starting with 993c6fb3bd7434975f3e09fd131fcbc093b45d6cc95712fdfc951aa5245147f3 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.525372 4798 scope.go:117] "RemoveContainer" containerID="ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.526872 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5"} err="failed to get container status \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": rpc error: code = NotFound desc = could not find container \"ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5\": container with ID starting with ccd33ef0370f09c2062a07fc8235b3e366bf2dace0417969c7cf94c30dcb6ef5 not found: ID does not exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.526900 4798 scope.go:117] "RemoveContainer" containerID="ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.527164 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e"} err="failed to get container status \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": rpc error: code = NotFound desc = could not find container \"ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e\": container with ID starting with ee264a53d624f704fd72f18f26f6ede02ec69d3121764438f023e5cb8aab821e not found: ID does not 
exist" Feb 03 00:25:44 crc kubenswrapper[4798]: I0203 00:25:44.916184 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71790a2-e390-400a-a288-2a3af8467047" path="/var/lib/kubelet/pods/b71790a2-e390-400a-a288-2a3af8467047/volumes" Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278729 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"60f9bb3c2da60d1582988064f176e4f4eb6cd6ace7ee03dde1db5fe4f9b0bf9b"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278777 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"3c91acde6e4fe665a0e970e36f993b3c8bfb3314ceaa2a90d017e137a167cf19"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278793 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"649de989328f6e2192e536f76f92ef4812d8dbbbda66e1197dff16fa6035a54b"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278805 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"2c1579eea77847723ee43bb7ebde4ab241b615132c7362e153e885e851ed2627"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278815 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"aa61b8ffe9bc19cd392e214fd4b8489e593fe1cf3e4f91456378f1991857c045"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.278825 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"7f761668171d5c2d6413230258146d858c1de85d7a21f0a66d1f039f00fd9379"} Feb 03 00:25:45 crc kubenswrapper[4798]: I0203 00:25:45.281986 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b"} Feb 03 00:25:47 crc kubenswrapper[4798]: I0203 00:25:47.302637 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"a26c66eb8d4b942e979e8072ba5d3e4a78c9532f0b21534d92ec24a67d6c1390"} Feb 03 00:25:50 crc kubenswrapper[4798]: I0203 00:25:50.324105 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" event={"ID":"5621abfc-74ab-4b61-a1c4-c90456171d20","Type":"ContainerStarted","Data":"b38e4d553f5e9fe28c421baec24b228fa266e2a397bd2e50817d6cdefbcb5b1e"} Feb 03 00:25:50 crc kubenswrapper[4798]: I0203 00:25:50.324723 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:50 crc kubenswrapper[4798]: I0203 00:25:50.324741 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:50 crc kubenswrapper[4798]: I0203 00:25:50.353453 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" podStartSLOduration=7.353438031 podStartE2EDuration="7.353438031s" podCreationTimestamp="2026-02-03 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
00:25:50.3481112 +0000 UTC m=+642.114101221" watchObservedRunningTime="2026-02-03 00:25:50.353438031 +0000 UTC m=+642.119428042" Feb 03 00:25:50 crc kubenswrapper[4798]: I0203 00:25:50.358012 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:51 crc kubenswrapper[4798]: I0203 00:25:51.330343 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:51 crc kubenswrapper[4798]: I0203 00:25:51.367296 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:25:53 crc kubenswrapper[4798]: I0203 00:25:53.908326 4798 scope.go:117] "RemoveContainer" containerID="3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415" Feb 03 00:25:53 crc kubenswrapper[4798]: E0203 00:25:53.908540 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ktf4c_openshift-multus(106da5aa-5f2e-4d32-b172-4844ad6de7f6)\"" pod="openshift-multus/multus-ktf4c" podUID="106da5aa-5f2e-4d32-b172-4844ad6de7f6" Feb 03 00:26:08 crc kubenswrapper[4798]: I0203 00:26:08.915360 4798 scope.go:117] "RemoveContainer" containerID="3f41354d4d52290d36062ee17c80b961a674625ae7ecc249447b0a9b6b9b2415" Feb 03 00:26:09 crc kubenswrapper[4798]: I0203 00:26:09.438677 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ktf4c_106da5aa-5f2e-4d32-b172-4844ad6de7f6/kube-multus/2.log" Feb 03 00:26:09 crc kubenswrapper[4798]: I0203 00:26:09.438973 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ktf4c" event={"ID":"106da5aa-5f2e-4d32-b172-4844ad6de7f6","Type":"ContainerStarted","Data":"72c9a037cba9ec11aa9c610487b2b33bc32e73f5ea3ad9444f7a62262501783c"} Feb 03 00:26:13 crc 
kubenswrapper[4798]: I0203 00:26:13.904516 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kx2zk" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.416928 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"] Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.419095 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdvvb" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="registry-server" containerID="cri-o://46d8d467a0f228599115e6b4ff105404c9aaaea39bf252a59b89dde5032359f6" gracePeriod=30 Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.651952 4798 generic.go:334] "Generic (PLEG): container finished" podID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerID="46d8d467a0f228599115e6b4ff105404c9aaaea39bf252a59b89dde5032359f6" exitCode=0 Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.651973 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerDied","Data":"46d8d467a0f228599115e6b4ff105404c9aaaea39bf252a59b89dde5032359f6"} Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.764956 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.823172 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtsw\" (UniqueName: \"kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw\") pod \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.823243 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities\") pod \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.823287 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content\") pod \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\" (UID: \"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8\") " Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.824257 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities" (OuterVolumeSpecName: "utilities") pod "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" (UID: "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.829261 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw" (OuterVolumeSpecName: "kube-api-access-nqtsw") pod "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" (UID: "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8"). InnerVolumeSpecName "kube-api-access-nqtsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.847572 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" (UID: "d8def1db-9ef1-4ab9-a700-2cc12f4eecb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.924540 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.924572 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:45 crc kubenswrapper[4798]: I0203 00:26:45.924584 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtsw\" (UniqueName: \"kubernetes.io/projected/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8-kube-api-access-nqtsw\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.661440 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvvb" event={"ID":"d8def1db-9ef1-4ab9-a700-2cc12f4eecb8","Type":"ContainerDied","Data":"fe89afb57c31525aa0a91b6d9d94b1deb9c75b295ce6b9ca35b75f60d8d97531"} Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.661488 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvvb" Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.661509 4798 scope.go:117] "RemoveContainer" containerID="46d8d467a0f228599115e6b4ff105404c9aaaea39bf252a59b89dde5032359f6" Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.684024 4798 scope.go:117] "RemoveContainer" containerID="0ab6458df6d5b64bb4af526e242d00435955e991835dd873ebf67432185a57a9" Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.699435 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"] Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.702084 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvvb"] Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.703300 4798 scope.go:117] "RemoveContainer" containerID="f5890970047ce63a42dc79cfa89fafeb042f484b41acd3992b3fbd6a41ffcdc9" Feb 03 00:26:46 crc kubenswrapper[4798]: I0203 00:26:46.915299 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" path="/var/lib/kubelet/pods/d8def1db-9ef1-4ab9-a700-2cc12f4eecb8/volumes" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.094975 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l"] Feb 03 00:26:49 crc kubenswrapper[4798]: E0203 00:26:49.095493 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="extract-content" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.095509 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="extract-content" Feb 03 00:26:49 crc kubenswrapper[4798]: E0203 00:26:49.095526 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" 
containerName="extract-utilities" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.095534 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="extract-utilities" Feb 03 00:26:49 crc kubenswrapper[4798]: E0203 00:26:49.095548 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="registry-server" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.095556 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="registry-server" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.095701 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8def1db-9ef1-4ab9-a700-2cc12f4eecb8" containerName="registry-server" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.096565 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.098800 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.112163 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l"] Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.263538 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5tv\" (UniqueName: \"kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: 
I0203 00:26:49.263647 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.263694 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.364624 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.364725 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s5tv\" (UniqueName: \"kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.364787 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.365067 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.365098 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.385072 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s5tv\" (UniqueName: \"kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.414367 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.656501 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l"] Feb 03 00:26:49 crc kubenswrapper[4798]: I0203 00:26:49.687533 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" event={"ID":"2c793aa4-86ea-41a2-875f-a944cfcb19b9","Type":"ContainerStarted","Data":"7f51e8c4ca4504e81b6ecd2e09660e064c2df474354a7dd1bcbfe5c70e225d87"} Feb 03 00:26:50 crc kubenswrapper[4798]: I0203 00:26:50.694961 4798 generic.go:334] "Generic (PLEG): container finished" podID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerID="9765d02a585c5d63f8141b5ce8959a5fef3e3a74d10a02089f21fc02725ad31e" exitCode=0 Feb 03 00:26:50 crc kubenswrapper[4798]: I0203 00:26:50.695027 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" event={"ID":"2c793aa4-86ea-41a2-875f-a944cfcb19b9","Type":"ContainerDied","Data":"9765d02a585c5d63f8141b5ce8959a5fef3e3a74d10a02089f21fc02725ad31e"} Feb 03 00:26:50 crc kubenswrapper[4798]: I0203 00:26:50.697403 4798 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 00:26:52 crc kubenswrapper[4798]: I0203 00:26:52.709601 4798 generic.go:334] "Generic (PLEG): container finished" podID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerID="aeeafecba7ebd0f77c4176372bcfca3fefdccd07e6880aee9030554712735409" exitCode=0 Feb 03 00:26:52 crc kubenswrapper[4798]: I0203 00:26:52.709683 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" 
event={"ID":"2c793aa4-86ea-41a2-875f-a944cfcb19b9","Type":"ContainerDied","Data":"aeeafecba7ebd0f77c4176372bcfca3fefdccd07e6880aee9030554712735409"} Feb 03 00:26:53 crc kubenswrapper[4798]: I0203 00:26:53.718109 4798 generic.go:334] "Generic (PLEG): container finished" podID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerID="3eac01b5cc54fb016b2e8ddcd2bf3bf5b745b10978d5ac180c302581800fb440" exitCode=0 Feb 03 00:26:53 crc kubenswrapper[4798]: I0203 00:26:53.718396 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" event={"ID":"2c793aa4-86ea-41a2-875f-a944cfcb19b9","Type":"ContainerDied","Data":"3eac01b5cc54fb016b2e8ddcd2bf3bf5b745b10978d5ac180c302581800fb440"} Feb 03 00:26:54 crc kubenswrapper[4798]: I0203 00:26:54.938396 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.132793 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util\") pod \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.132854 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s5tv\" (UniqueName: \"kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv\") pod \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\" (UID: \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.132907 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle\") pod \"2c793aa4-86ea-41a2-875f-a944cfcb19b9\" (UID: 
\"2c793aa4-86ea-41a2-875f-a944cfcb19b9\") " Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.134858 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle" (OuterVolumeSpecName: "bundle") pod "2c793aa4-86ea-41a2-875f-a944cfcb19b9" (UID: "2c793aa4-86ea-41a2-875f-a944cfcb19b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.137581 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv" (OuterVolumeSpecName: "kube-api-access-8s5tv") pod "2c793aa4-86ea-41a2-875f-a944cfcb19b9" (UID: "2c793aa4-86ea-41a2-875f-a944cfcb19b9"). InnerVolumeSpecName "kube-api-access-8s5tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.234070 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s5tv\" (UniqueName: \"kubernetes.io/projected/2c793aa4-86ea-41a2-875f-a944cfcb19b9-kube-api-access-8s5tv\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.234331 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.380040 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util" (OuterVolumeSpecName: "util") pod "2c793aa4-86ea-41a2-875f-a944cfcb19b9" (UID: "2c793aa4-86ea-41a2-875f-a944cfcb19b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.436457 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c793aa4-86ea-41a2-875f-a944cfcb19b9-util\") on node \"crc\" DevicePath \"\"" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.733682 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" event={"ID":"2c793aa4-86ea-41a2-875f-a944cfcb19b9","Type":"ContainerDied","Data":"7f51e8c4ca4504e81b6ecd2e09660e064c2df474354a7dd1bcbfe5c70e225d87"} Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.733749 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f51e8c4ca4504e81b6ecd2e09660e064c2df474354a7dd1bcbfe5c70e225d87" Feb 03 00:26:55 crc kubenswrapper[4798]: I0203 00:26:55.733777 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.242724 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd"] Feb 03 00:26:56 crc kubenswrapper[4798]: E0203 00:26:56.242986 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="extract" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.243003 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="extract" Feb 03 00:26:56 crc kubenswrapper[4798]: E0203 00:26:56.243028 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="pull" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.243036 4798 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="pull" Feb 03 00:26:56 crc kubenswrapper[4798]: E0203 00:26:56.243049 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="util" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.243057 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="util" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.243187 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c793aa4-86ea-41a2-875f-a944cfcb19b9" containerName="extract" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.244054 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.246676 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.251500 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd"] Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.348494 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrjs\" (UniqueName: \"kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.348562 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.348603 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.450121 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrjs\" (UniqueName: \"kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.450203 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.450254 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.450674 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.450676 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.476393 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrjs\" (UniqueName: \"kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.558414 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:26:56 crc kubenswrapper[4798]: I0203 00:26:56.949431 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd"] Feb 03 00:26:57 crc kubenswrapper[4798]: E0203 00:26:57.207062 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872f0c5a_672d_4f2a_a4ce_ac6db6a8c75c.slice/crio-conmon-e6d34ecbea49772eb4fe1be310b4bdd5c048197c00bacc3740972bbd9781807a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872f0c5a_672d_4f2a_a4ce_ac6db6a8c75c.slice/crio-e6d34ecbea49772eb4fe1be310b4bdd5c048197c00bacc3740972bbd9781807a.scope\": RecentStats: unable to find data in memory cache]" Feb 03 00:26:57 crc kubenswrapper[4798]: I0203 00:26:57.745729 4798 generic.go:334] "Generic (PLEG): container finished" podID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerID="e6d34ecbea49772eb4fe1be310b4bdd5c048197c00bacc3740972bbd9781807a" exitCode=0 Feb 03 00:26:57 crc kubenswrapper[4798]: I0203 00:26:57.745776 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" event={"ID":"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c","Type":"ContainerDied","Data":"e6d34ecbea49772eb4fe1be310b4bdd5c048197c00bacc3740972bbd9781807a"} Feb 03 00:26:57 crc kubenswrapper[4798]: I0203 00:26:57.745807 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" event={"ID":"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c","Type":"ContainerStarted","Data":"46e54aa3833fd8d4bae73a76ce6feca6b3d82c875994da7eeb1c3a3bbd65b4db"} Feb 03 00:26:59 crc 
kubenswrapper[4798]: I0203 00:26:59.759166 4798 generic.go:334] "Generic (PLEG): container finished" podID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerID="a40a45be91708235c345d86f484149e41f4933aa7ba8ae724d48fcd1c4ed75da" exitCode=0 Feb 03 00:26:59 crc kubenswrapper[4798]: I0203 00:26:59.759210 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" event={"ID":"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c","Type":"ContainerDied","Data":"a40a45be91708235c345d86f484149e41f4933aa7ba8ae724d48fcd1c4ed75da"} Feb 03 00:27:00 crc kubenswrapper[4798]: I0203 00:27:00.766328 4798 generic.go:334] "Generic (PLEG): container finished" podID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerID="7c2a356f39051394ddcaf4fe8cd50ff1ea105574e1248f028c74a7be2cb1f68f" exitCode=0 Feb 03 00:27:00 crc kubenswrapper[4798]: I0203 00:27:00.766439 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" event={"ID":"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c","Type":"ContainerDied","Data":"7c2a356f39051394ddcaf4fe8cd50ff1ea105574e1248f028c74a7be2cb1f68f"} Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.148169 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.229566 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util\") pod \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.229639 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wrjs\" (UniqueName: \"kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs\") pod \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.229761 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle\") pod \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\" (UID: \"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c\") " Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.230907 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle" (OuterVolumeSpecName: "bundle") pod "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" (UID: "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.242200 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs" (OuterVolumeSpecName: "kube-api-access-5wrjs") pod "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" (UID: "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c"). InnerVolumeSpecName "kube-api-access-5wrjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.331343 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wrjs\" (UniqueName: \"kubernetes.io/projected/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-kube-api-access-5wrjs\") on node \"crc\" DevicePath \"\"" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.331378 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.345844 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"] Feb 03 00:27:02 crc kubenswrapper[4798]: E0203 00:27:02.346044 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="extract" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.346055 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="extract" Feb 03 00:27:02 crc kubenswrapper[4798]: E0203 00:27:02.346066 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="pull" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.346071 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="pull" Feb 03 00:27:02 crc kubenswrapper[4798]: E0203 00:27:02.346081 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="util" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.346087 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="util" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.346172 4798 
memory_manager.go:354] "RemoveStaleState removing state" podUID="872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" containerName="extract" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.346931 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.362965 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"] Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.432953 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.432994 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2m5\" (UniqueName: \"kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.433021 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.534475 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.534515 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2m5\" (UniqueName: \"kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.534537 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.534959 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.535069 4798 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.575572 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2m5\" (UniqueName: \"kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.639143 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util" (OuterVolumeSpecName: "util") pod "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c" (UID: "872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.659097 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"
Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.737563 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c-util\") on node \"crc\" DevicePath \"\""
Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.793967 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd" event={"ID":"872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c","Type":"ContainerDied","Data":"46e54aa3833fd8d4bae73a76ce6feca6b3d82c875994da7eeb1c3a3bbd65b4db"}
Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.794017 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e54aa3833fd8d4bae73a76ce6feca6b3d82c875994da7eeb1c3a3bbd65b4db"
Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.794032 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd"
Feb 03 00:27:02 crc kubenswrapper[4798]: I0203 00:27:02.993922 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"]
Feb 03 00:27:03 crc kubenswrapper[4798]: I0203 00:27:03.801421 4798 generic.go:334] "Generic (PLEG): container finished" podID="71af01f0-a9e8-4957-a949-22963b9fa386" containerID="a09da4a000f5a8bbe99187fc1235ac631390ea4182c1a54bd93ed9d028537db3" exitCode=0
Feb 03 00:27:03 crc kubenswrapper[4798]: I0203 00:27:03.801520 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" event={"ID":"71af01f0-a9e8-4957-a949-22963b9fa386","Type":"ContainerDied","Data":"a09da4a000f5a8bbe99187fc1235ac631390ea4182c1a54bd93ed9d028537db3"}
Feb 03 00:27:03 crc kubenswrapper[4798]: I0203 00:27:03.801626 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" event={"ID":"71af01f0-a9e8-4957-a949-22963b9fa386","Type":"ContainerStarted","Data":"2cba1ae04d20636104cede45504ce2debb1a033dac4113e63b80f65a22f2bd8f"}
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.127132 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.128180 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.131045 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.131133 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.137395 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.139644 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-b9pkf"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.184208 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8b2f\" (UniqueName: \"kubernetes.io/projected/2c3d3d69-dea0-40fb-9d55-252fb8b34c1c-kube-api-access-x8b2f\") pod \"obo-prometheus-operator-68bc856cb9-tp6c4\" (UID: \"2c3d3d69-dea0-40fb-9d55-252fb8b34c1c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.248602 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.249234 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.254567 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.255188 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.263369 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.263460 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-srx2f"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.266104 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.274312 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.284891 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.284926 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.284947 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8b2f\" (UniqueName: \"kubernetes.io/projected/2c3d3d69-dea0-40fb-9d55-252fb8b34c1c-kube-api-access-x8b2f\") pod \"obo-prometheus-operator-68bc856cb9-tp6c4\" (UID: \"2c3d3d69-dea0-40fb-9d55-252fb8b34c1c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.284966 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.285083 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.311181 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8b2f\" (UniqueName: \"kubernetes.io/projected/2c3d3d69-dea0-40fb-9d55-252fb8b34c1c-kube-api-access-x8b2f\") pod \"obo-prometheus-operator-68bc856cb9-tp6c4\" (UID: \"2c3d3d69-dea0-40fb-9d55-252fb8b34c1c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.386798 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.387111 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.387201 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.387301 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.393420 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.393519 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.394335 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f1b740-046a-47f9-9257-62e739509702-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l\" (UID: \"28f1b740-046a-47f9-9257-62e739509702\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.394744 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ac09692-d88a-47d1-b4e6-05cc3020ebf1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg\" (UID: \"7ac09692-d88a-47d1-b4e6-05cc3020ebf1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.469814 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wpfwr"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.471936 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.478108 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.478468 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-74p4g"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.487852 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.488485 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrss2\" (UniqueName: \"kubernetes.io/projected/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-kube-api-access-lrss2\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.493210 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.497359 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wpfwr"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.581296 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.588728 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.589220 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.589268 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrss2\" (UniqueName: \"kubernetes.io/projected/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-kube-api-access-lrss2\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.594013 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.621110 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrss2\" (UniqueName: \"kubernetes.io/projected/b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb-kube-api-access-lrss2\") pod \"observability-operator-59bdc8b94-wpfwr\" (UID: \"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb\") " pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.654649 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dlssk"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.655741 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.662884 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5f2q5"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.672512 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dlssk"]
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.690088 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e8887da-9f98-4a78-a5df-756bf2d2d31e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.690165 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286lx\" (UniqueName: \"kubernetes.io/projected/4e8887da-9f98-4a78-a5df-756bf2d2d31e-kube-api-access-286lx\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.793120 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e8887da-9f98-4a78-a5df-756bf2d2d31e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.793180 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286lx\" (UniqueName: \"kubernetes.io/projected/4e8887da-9f98-4a78-a5df-756bf2d2d31e-kube-api-access-286lx\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.794413 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e8887da-9f98-4a78-a5df-756bf2d2d31e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.806887 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.811233 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286lx\" (UniqueName: \"kubernetes.io/projected/4e8887da-9f98-4a78-a5df-756bf2d2d31e-kube-api-access-286lx\") pod \"perses-operator-5bf474d74f-dlssk\" (UID: \"4e8887da-9f98-4a78-a5df-756bf2d2d31e\") " pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:06 crc kubenswrapper[4798]: I0203 00:27:06.972788 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.217974 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l"]
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.234381 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg"]
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.239335 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wpfwr"]
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.320092 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4"]
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.320144 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dlssk"]
Feb 03 00:27:08 crc kubenswrapper[4798]: W0203 00:27:08.329251 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e8887da_9f98_4a78_a5df_756bf2d2d31e.slice/crio-def9bfbb9d8c1e4b8d361a835ce6d12fe51303d7befc73fe4c3e9a7e0a995d51 WatchSource:0}: Error finding container def9bfbb9d8c1e4b8d361a835ce6d12fe51303d7befc73fe4c3e9a7e0a995d51: Status 404 returned error can't find the container with id def9bfbb9d8c1e4b8d361a835ce6d12fe51303d7befc73fe4c3e9a7e0a995d51
Feb 03 00:27:08 crc kubenswrapper[4798]: W0203 00:27:08.350452 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c3d3d69_dea0_40fb_9d55_252fb8b34c1c.slice/crio-2ce49d0e837683dcf3cacef295529cf8813010657cd6e4667d158526dfc982b5 WatchSource:0}: Error finding container 2ce49d0e837683dcf3cacef295529cf8813010657cd6e4667d158526dfc982b5: Status 404 returned error can't find the container with id 2ce49d0e837683dcf3cacef295529cf8813010657cd6e4667d158526dfc982b5
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.830206 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l" event={"ID":"28f1b740-046a-47f9-9257-62e739509702","Type":"ContainerStarted","Data":"f11a07f9329abd7e5a7cbfc15557b42839d5789a8c87fc33c17ea53bd45520b0"}
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.835450 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr" event={"ID":"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb","Type":"ContainerStarted","Data":"eefc201b42aa0dc779b483f2477106178c8d84b642001ab1258e7b0cdded4f95"}
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.838553 4798 generic.go:334] "Generic (PLEG): container finished" podID="71af01f0-a9e8-4957-a949-22963b9fa386" containerID="d901dd9d7161d626cfc343434fd7d2b334942176730afbfc3e185d882b0258c3" exitCode=0
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.838660 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" event={"ID":"71af01f0-a9e8-4957-a949-22963b9fa386","Type":"ContainerDied","Data":"d901dd9d7161d626cfc343434fd7d2b334942176730afbfc3e185d882b0258c3"}
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.840963 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg" event={"ID":"7ac09692-d88a-47d1-b4e6-05cc3020ebf1","Type":"ContainerStarted","Data":"71db9f300c37493a53bc66296ca96b4864531179d7d32bc989ddd942a836a7ce"}
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.845386 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4" event={"ID":"2c3d3d69-dea0-40fb-9d55-252fb8b34c1c","Type":"ContainerStarted","Data":"2ce49d0e837683dcf3cacef295529cf8813010657cd6e4667d158526dfc982b5"}
Feb 03 00:27:08 crc kubenswrapper[4798]: I0203 00:27:08.846733 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-dlssk" event={"ID":"4e8887da-9f98-4a78-a5df-756bf2d2d31e","Type":"ContainerStarted","Data":"def9bfbb9d8c1e4b8d361a835ce6d12fe51303d7befc73fe4c3e9a7e0a995d51"}
Feb 03 00:27:09 crc kubenswrapper[4798]: I0203 00:27:09.867529 4798 generic.go:334] "Generic (PLEG): container finished" podID="71af01f0-a9e8-4957-a949-22963b9fa386" containerID="17839f90036c3ae7f1b1224bdbfac66c19667eca74a73545b6fb4d3cd49c94fa" exitCode=0
Feb 03 00:27:09 crc kubenswrapper[4798]: I0203 00:27:09.867597 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" event={"ID":"71af01f0-a9e8-4957-a949-22963b9fa386","Type":"ContainerDied","Data":"17839f90036c3ae7f1b1224bdbfac66c19667eca74a73545b6fb4d3cd49c94fa"}
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.190604 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.366101 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g2m5\" (UniqueName: \"kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5\") pod \"71af01f0-a9e8-4957-a949-22963b9fa386\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") "
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.366284 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util\") pod \"71af01f0-a9e8-4957-a949-22963b9fa386\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") "
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.366309 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle\") pod \"71af01f0-a9e8-4957-a949-22963b9fa386\" (UID: \"71af01f0-a9e8-4957-a949-22963b9fa386\") "
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.368012 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle" (OuterVolumeSpecName: "bundle") pod "71af01f0-a9e8-4957-a949-22963b9fa386" (UID: "71af01f0-a9e8-4957-a949-22963b9fa386"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.382081 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util" (OuterVolumeSpecName: "util") pod "71af01f0-a9e8-4957-a949-22963b9fa386" (UID: "71af01f0-a9e8-4957-a949-22963b9fa386"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.384766 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5" (OuterVolumeSpecName: "kube-api-access-7g2m5") pod "71af01f0-a9e8-4957-a949-22963b9fa386" (UID: "71af01f0-a9e8-4957-a949-22963b9fa386"). InnerVolumeSpecName "kube-api-access-7g2m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.471281 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-util\") on node \"crc\" DevicePath \"\""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.471316 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71af01f0-a9e8-4957-a949-22963b9fa386-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.471325 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g2m5\" (UniqueName: \"kubernetes.io/projected/71af01f0-a9e8-4957-a949-22963b9fa386-kube-api-access-7g2m5\") on node \"crc\" DevicePath \"\""
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.889738 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx" event={"ID":"71af01f0-a9e8-4957-a949-22963b9fa386","Type":"ContainerDied","Data":"2cba1ae04d20636104cede45504ce2debb1a033dac4113e63b80f65a22f2bd8f"}
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.889771 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cba1ae04d20636104cede45504ce2debb1a033dac4113e63b80f65a22f2bd8f"
Feb 03 00:27:11 crc kubenswrapper[4798]: I0203 00:27:11.889829 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575119 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"]
Feb 03 00:27:12 crc kubenswrapper[4798]: E0203 00:27:12.575326 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="pull"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575338 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="pull"
Feb 03 00:27:12 crc kubenswrapper[4798]: E0203 00:27:12.575353 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="extract"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575359 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="extract"
Feb 03 00:27:12 crc kubenswrapper[4798]: E0203 00:27:12.575372 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="util"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575378 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="util"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575463 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="71af01f0-a9e8-4957-a949-22963b9fa386" containerName="extract"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.575839 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.577578 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-krbl6"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.577786 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.577910 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.577910 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.586709 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-apiservice-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.586798 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmdc\" (UniqueName: \"kubernetes.io/projected/e199491f-678a-4db6-9604-578e5d834b84-kube-api-access-5pmdc\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.586854 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-webhook-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.597659 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"]
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.688225 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-apiservice-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.688289 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmdc\" (UniqueName: \"kubernetes.io/projected/e199491f-678a-4db6-9604-578e5d834b84-kube-api-access-5pmdc\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.688520 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-webhook-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.692596 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-webhook-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.693413 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e199491f-678a-4db6-9604-578e5d834b84-apiservice-cert\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.708457 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmdc\" (UniqueName: \"kubernetes.io/projected/e199491f-678a-4db6-9604-578e5d834b84-kube-api-access-5pmdc\") pod \"elastic-operator-74fd7d6bb7-l7bs9\" (UID: \"e199491f-678a-4db6-9604-578e5d834b84\") " pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:12 crc kubenswrapper[4798]: I0203 00:27:12.890517 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.374083 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-74fd7d6bb7-l7bs9"]
Feb 03 00:27:18 crc kubenswrapper[4798]: W0203 00:27:18.381138 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode199491f_678a_4db6_9604_578e5d834b84.slice/crio-de67f34448b5983df911bf7e56e036e40ebe763e6e802dfc0f53e8a3e8179fe2 WatchSource:0}: Error finding container de67f34448b5983df911bf7e56e036e40ebe763e6e802dfc0f53e8a3e8179fe2: Status 404 returned error can't find the container with id de67f34448b5983df911bf7e56e036e40ebe763e6e802dfc0f53e8a3e8179fe2
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.951509 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l" event={"ID":"28f1b740-046a-47f9-9257-62e739509702","Type":"ContainerStarted","Data":"bcf541a178720a613ef430cc70e8a908ad5faaa97128a35bfdf73f1e48743006"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.953231 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr" event={"ID":"b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb","Type":"ContainerStarted","Data":"0cf884acfede97d6fe9f4bfb2bd243fa11db6d44fe470dd4dc3c869481d9d815"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.953593 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.955123 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg" event={"ID":"7ac09692-d88a-47d1-b4e6-05cc3020ebf1","Type":"ContainerStarted","Data":"563776f38f30ae7b6986d61c3d4a5728fcb54c5e6a3cf662b9c329e7e050f221"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.955774 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr"
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.956826 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4" event={"ID":"2c3d3d69-dea0-40fb-9d55-252fb8b34c1c","Type":"ContainerStarted","Data":"269cc2704376dddb1a3d4eb484f8cba595ff3c1479bb36dc001692e5e9960397"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.958742 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-dlssk" event={"ID":"4e8887da-9f98-4a78-a5df-756bf2d2d31e","Type":"ContainerStarted","Data":"90f5bd2fab79732908888fc617aae9d1b965fae96fa4d7a94332a8704eeb1c5f"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.959097 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-dlssk"
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.960182 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9" event={"ID":"e199491f-678a-4db6-9604-578e5d834b84","Type":"ContainerStarted","Data":"de67f34448b5983df911bf7e56e036e40ebe763e6e802dfc0f53e8a3e8179fe2"}
Feb 03 00:27:18 crc kubenswrapper[4798]: I0203 00:27:18.977879 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l" podStartSLOduration=3.183989299 podStartE2EDuration="12.977855163s" podCreationTimestamp="2026-02-03 00:27:06 +0000 UTC" firstStartedPulling="2026-02-03 00:27:08.237421356 +0000 UTC m=+720.003411367" lastFinishedPulling="2026-02-03 00:27:18.03128722 +0000 UTC m=+729.797277231" observedRunningTime="2026-02-03 00:27:18.972513992 +0000 UTC m=+730.738504013" watchObservedRunningTime="2026-02-03 00:27:18.977855163 +0000 UTC m=+730.743845174"
Feb 03 00:27:19 crc kubenswrapper[4798]: I0203 00:27:19.028147 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tp6c4" podStartSLOduration=3.356520801 podStartE2EDuration="13.028129166s" podCreationTimestamp="2026-02-03 00:27:06 +0000 UTC" firstStartedPulling="2026-02-03 00:27:08.354423937 +0000 UTC m=+720.120413948" lastFinishedPulling="2026-02-03 00:27:18.026032302 +0000 UTC m=+729.792022313" observedRunningTime="2026-02-03 00:27:19.026087953 +0000 UTC m=+730.792077974" watchObservedRunningTime="2026-02-03 00:27:19.028129166 +0000 UTC m=+730.794119177"
Feb 03 00:27:19 crc kubenswrapper[4798]: I0203 00:27:19.053373 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-wpfwr" podStartSLOduration=3.229177058 podStartE2EDuration="13.053357771s" podCreationTimestamp="2026-02-03 00:27:06 +0000 UTC" firstStartedPulling="2026-02-03 00:27:08.249920325 +0000 UTC m=+720.015910336" lastFinishedPulling="2026-02-03
00:27:18.074101038 +0000 UTC m=+729.840091049" observedRunningTime="2026-02-03 00:27:19.04837852 +0000 UTC m=+730.814368531" watchObservedRunningTime="2026-02-03 00:27:19.053357771 +0000 UTC m=+730.819347782" Feb 03 00:27:19 crc kubenswrapper[4798]: I0203 00:27:19.066505 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg" podStartSLOduration=3.229561199 podStartE2EDuration="13.066490407s" podCreationTimestamp="2026-02-03 00:27:06 +0000 UTC" firstStartedPulling="2026-02-03 00:27:08.236134172 +0000 UTC m=+720.002124193" lastFinishedPulling="2026-02-03 00:27:18.07306339 +0000 UTC m=+729.839053401" observedRunningTime="2026-02-03 00:27:19.064196566 +0000 UTC m=+730.830186567" watchObservedRunningTime="2026-02-03 00:27:19.066490407 +0000 UTC m=+730.832480418" Feb 03 00:27:19 crc kubenswrapper[4798]: I0203 00:27:19.084001 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-dlssk" podStartSLOduration=3.390228188 podStartE2EDuration="13.083987067s" podCreationTimestamp="2026-02-03 00:27:06 +0000 UTC" firstStartedPulling="2026-02-03 00:27:08.331323818 +0000 UTC m=+720.097313829" lastFinishedPulling="2026-02-03 00:27:18.025082687 +0000 UTC m=+729.791072708" observedRunningTime="2026-02-03 00:27:19.081498841 +0000 UTC m=+730.847488852" watchObservedRunningTime="2026-02-03 00:27:19.083987067 +0000 UTC m=+730.849977078" Feb 03 00:27:23 crc kubenswrapper[4798]: I0203 00:27:23.992235 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9" event={"ID":"e199491f-678a-4db6-9604-578e5d834b84","Type":"ContainerStarted","Data":"6a14a77ee428fb7e4b21e26f045abb94f6068ec02f7af52058242624ef533dbf"} Feb 03 00:27:24 crc kubenswrapper[4798]: I0203 00:27:24.017868 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/elastic-operator-74fd7d6bb7-l7bs9" podStartSLOduration=7.204851269 podStartE2EDuration="12.017847112s" podCreationTimestamp="2026-02-03 00:27:12 +0000 UTC" firstStartedPulling="2026-02-03 00:27:18.384473919 +0000 UTC m=+730.150463930" lastFinishedPulling="2026-02-03 00:27:23.197469762 +0000 UTC m=+734.963459773" observedRunningTime="2026-02-03 00:27:24.012957863 +0000 UTC m=+735.778947884" watchObservedRunningTime="2026-02-03 00:27:24.017847112 +0000 UTC m=+735.783837123" Feb 03 00:27:26 crc kubenswrapper[4798]: I0203 00:27:26.975505 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-dlssk" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.104364 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd"] Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.105405 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.107450 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.108118 4798 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-b57cd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.108430 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.117776 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd"] Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.277502 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50313f40-8e81-423b-b6cd-612eb1ae5bd2-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.277624 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcnq\" (UniqueName: \"kubernetes.io/projected/50313f40-8e81-423b-b6cd-612eb1ae5bd2-kube-api-access-wkcnq\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.378682 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wkcnq\" (UniqueName: \"kubernetes.io/projected/50313f40-8e81-423b-b6cd-612eb1ae5bd2-kube-api-access-wkcnq\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.378751 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50313f40-8e81-423b-b6cd-612eb1ae5bd2-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.379206 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50313f40-8e81-423b-b6cd-612eb1ae5bd2-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.398589 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcnq\" (UniqueName: \"kubernetes.io/projected/50313f40-8e81-423b-b6cd-612eb1ae5bd2-kube-api-access-wkcnq\") pod \"cert-manager-operator-controller-manager-5586865c96-kh2sd\" (UID: \"50313f40-8e81-423b-b6cd-612eb1ae5bd2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.420346 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" Feb 03 00:27:27 crc kubenswrapper[4798]: I0203 00:27:27.658074 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd"] Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.017118 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" event={"ID":"50313f40-8e81-423b-b6cd-612eb1ae5bd2","Type":"ContainerStarted","Data":"ad7be34501e461eea9906ff21b9636faa991cabb2139001174f80def32384ec6"} Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.076689 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.077668 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.080413 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.086145 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.089352 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.089961 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.090344 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 03 00:27:28 crc kubenswrapper[4798]: 
I0203 00:27:28.090515 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.091422 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.091561 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.093041 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-grhjh" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.102650 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188724 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188768 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188786 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188805 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188827 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188858 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.188933 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189025 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189075 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189180 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189224 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189264 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189283 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189334 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.189362 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291011 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291066 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291118 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291146 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291177 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291203 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: 
\"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291228 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291247 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291298 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291328 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291351 4798 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291375 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291405 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291439 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291464 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291484 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.291732 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.292425 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.292697 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.293529 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.293760 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.294240 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.294780 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.296541 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.296573 4798 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.296685 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.296770 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.296992 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.300884 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.314306 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.393248 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:27:28 crc kubenswrapper[4798]: I0203 00:27:28.855386 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 03 00:27:28 crc kubenswrapper[4798]: W0203 00:27:28.867528 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeee642f8_9cb1_4c5a_8f54_0f1826c3e2a8.slice/crio-e6d9c279be8f60c901f77e3ee16416fa8911f62e1df0211fc05826c533f53a6b WatchSource:0}: Error finding container e6d9c279be8f60c901f77e3ee16416fa8911f62e1df0211fc05826c533f53a6b: Status 404 returned error can't find the container with id e6d9c279be8f60c901f77e3ee16416fa8911f62e1df0211fc05826c533f53a6b Feb 03 00:27:29 crc kubenswrapper[4798]: I0203 00:27:29.053236 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8","Type":"ContainerStarted","Data":"e6d9c279be8f60c901f77e3ee16416fa8911f62e1df0211fc05826c533f53a6b"} Feb 03 00:27:32 crc kubenswrapper[4798]: I0203 00:27:32.077183 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" 
event={"ID":"50313f40-8e81-423b-b6cd-612eb1ae5bd2","Type":"ContainerStarted","Data":"31b519700c929f4eac1f1e03c6d7ed24d81397a2d5df0ac6a5d493896d775e04"} Feb 03 00:27:32 crc kubenswrapper[4798]: I0203 00:27:32.109537 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-kh2sd" podStartSLOduration=1.44252097 podStartE2EDuration="5.109514379s" podCreationTimestamp="2026-02-03 00:27:27 +0000 UTC" firstStartedPulling="2026-02-03 00:27:27.674083687 +0000 UTC m=+739.440073698" lastFinishedPulling="2026-02-03 00:27:31.341077096 +0000 UTC m=+743.107067107" observedRunningTime="2026-02-03 00:27:32.106893699 +0000 UTC m=+743.872883710" watchObservedRunningTime="2026-02-03 00:27:32.109514379 +0000 UTC m=+743.875504390" Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.951360 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l5zfv"] Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.952569 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.954996 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.956924 4798 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cnbnc" Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.956982 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 00:27:34 crc kubenswrapper[4798]: I0203 00:27:34.969726 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l5zfv"] Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.095492 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: \"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.095697 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2d2\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-kube-api-access-9p2d2\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: \"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.197187 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2d2\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-kube-api-access-9p2d2\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: 
\"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.197267 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: \"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.224738 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2d2\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-kube-api-access-9p2d2\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: \"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.225497 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/106aaf86-eb7b-4039-a9f8-24107a7d0f0b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l5zfv\" (UID: \"106aaf86-eb7b-4039-a9f8-24107a7d0f0b\") " pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:35 crc kubenswrapper[4798]: I0203 00:27:35.273164 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.500950 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qm6qm"] Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.502868 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.526892 4798 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rlchw" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.529932 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qm6qm"] Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.580473 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l5zfv"] Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.646627 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm7c\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-kube-api-access-5wm7c\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.646712 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.747382 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wm7c\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-kube-api-access-5wm7c\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.747449 
4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.771941 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.775173 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wm7c\" (UniqueName: \"kubernetes.io/projected/cdb1b01d-b0e6-411b-98d4-794ff69012ed-kube-api-access-5wm7c\") pod \"cert-manager-cainjector-5545bd876-qm6qm\" (UID: \"cdb1b01d-b0e6-411b-98d4-794ff69012ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:38 crc kubenswrapper[4798]: I0203 00:27:38.840266 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" Feb 03 00:27:39 crc kubenswrapper[4798]: I0203 00:27:39.120980 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" event={"ID":"106aaf86-eb7b-4039-a9f8-24107a7d0f0b","Type":"ContainerStarted","Data":"d221dbda0539064ec13ce15b11d7c209973b890f4be5f7e325459aadf51f2e1f"} Feb 03 00:27:39 crc kubenswrapper[4798]: I0203 00:27:39.121727 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qm6qm"] Feb 03 00:27:39 crc kubenswrapper[4798]: W0203 00:27:39.146098 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb1b01d_b0e6_411b_98d4_794ff69012ed.slice/crio-55256534ef8db2416fbef7ec87125cc202db56854299831b7a9cf232bc4fb1c7 WatchSource:0}: Error finding container 55256534ef8db2416fbef7ec87125cc202db56854299831b7a9cf232bc4fb1c7: Status 404 returned error can't find the container with id 55256534ef8db2416fbef7ec87125cc202db56854299831b7a9cf232bc4fb1c7 Feb 03 00:27:40 crc kubenswrapper[4798]: I0203 00:27:40.131097 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" event={"ID":"cdb1b01d-b0e6-411b-98d4-794ff69012ed","Type":"ContainerStarted","Data":"55256534ef8db2416fbef7ec87125cc202db56854299831b7a9cf232bc4fb1c7"} Feb 03 00:27:43 crc kubenswrapper[4798]: I0203 00:27:43.055574 4798 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.417606 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-j9bhv"] Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.418588 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.426542 4798 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2gc7z" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.431478 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-j9bhv"] Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.566530 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-bound-sa-token\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: \"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.566833 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnp9\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-kube-api-access-ncnp9\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: \"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.668318 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnp9\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-kube-api-access-ncnp9\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: \"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.668419 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-bound-sa-token\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: 
\"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.700795 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnp9\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-kube-api-access-ncnp9\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: \"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.704350 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0434cd3-e97a-4434-919d-258cf482b4cb-bound-sa-token\") pod \"cert-manager-545d4d4674-j9bhv\" (UID: \"e0434cd3-e97a-4434-919d-258cf482b4cb\") " pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:46 crc kubenswrapper[4798]: I0203 00:27:46.737286 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-j9bhv" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.233240 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.233818 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) 
--dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9p2d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-6888856db4-l5zfv_cert-manager(106aaf86-eb7b-4039-a9f8-24107a7d0f0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.235164 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" podUID="106aaf86-eb7b-4039-a9f8-24107a7d0f0b" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.311349 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.311488 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wm7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-5545bd876-qm6qm_cert-manager(cdb1b01d-b0e6-411b-98d4-794ff69012ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 00:27:52 crc 
kubenswrapper[4798]: E0203 00:27:52.312643 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" podUID="cdb1b01d-b0e6-411b-98d4-794ff69012ed" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.588795 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Feb 03 00:27:52 crc kubenswrapper[4798]: E0203 00:27:52.589266 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elas
tic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,R
eadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:27:52 crc kubenswrapper[4798]: 
E0203 00:27:52.590527 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" Feb 03 00:27:52 crc kubenswrapper[4798]: I0203 00:27:52.920633 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-j9bhv"] Feb 03 00:27:53 crc kubenswrapper[4798]: I0203 00:27:53.212981 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-j9bhv" event={"ID":"e0434cd3-e97a-4434-919d-258cf482b4cb","Type":"ContainerStarted","Data":"20541356017af4e2323f351f2c799578389d310edb6ea394ee46ad8893aa4dab"} Feb 03 00:27:53 crc kubenswrapper[4798]: E0203 00:27:53.215784 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671\\\"\"" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" podUID="106aaf86-eb7b-4039-a9f8-24107a7d0f0b" Feb 03 00:27:53 crc kubenswrapper[4798]: E0203 00:27:53.215929 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" Feb 03 00:27:53 crc kubenswrapper[4798]: E0203 00:27:53.216202 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671\\\"\"" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" podUID="cdb1b01d-b0e6-411b-98d4-794ff69012ed" Feb 03 00:27:53 crc kubenswrapper[4798]: I0203 00:27:53.554415 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 03 00:27:53 crc kubenswrapper[4798]: I0203 00:27:53.592976 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 03 00:27:54 crc kubenswrapper[4798]: E0203 00:27:54.222617 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" Feb 03 00:27:55 crc kubenswrapper[4798]: I0203 00:27:55.229121 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-j9bhv" event={"ID":"e0434cd3-e97a-4434-919d-258cf482b4cb","Type":"ContainerStarted","Data":"d62d6b91cee4b4cb84cd4d73bc45ca472638ad41d68bb43d6ed03cc94643f946"} Feb 03 00:27:55 crc kubenswrapper[4798]: E0203 00:27:55.231583 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" Feb 03 00:27:55 crc kubenswrapper[4798]: I0203 00:27:55.249264 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-j9bhv" podStartSLOduration=7.808936237 podStartE2EDuration="9.249240488s" 
podCreationTimestamp="2026-02-03 00:27:46 +0000 UTC" firstStartedPulling="2026-02-03 00:27:52.922285461 +0000 UTC m=+764.688275512" lastFinishedPulling="2026-02-03 00:27:54.362589752 +0000 UTC m=+766.128579763" observedRunningTime="2026-02-03 00:27:55.245865428 +0000 UTC m=+767.011855479" watchObservedRunningTime="2026-02-03 00:27:55.249240488 +0000 UTC m=+767.015230539" Feb 03 00:28:04 crc kubenswrapper[4798]: I0203 00:28:04.290645 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" event={"ID":"106aaf86-eb7b-4039-a9f8-24107a7d0f0b","Type":"ContainerStarted","Data":"053e7af31c5cf3bfca3262079c297f54dec6f77322d9ae39f6ea738c69490a4b"} Feb 03 00:28:04 crc kubenswrapper[4798]: I0203 00:28:04.291400 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:28:04 crc kubenswrapper[4798]: I0203 00:28:04.306673 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" podStartSLOduration=-9223372006.54814 podStartE2EDuration="30.306636872s" podCreationTimestamp="2026-02-03 00:27:34 +0000 UTC" firstStartedPulling="2026-02-03 00:27:38.614753496 +0000 UTC m=+750.380743507" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:28:04.304226788 +0000 UTC m=+776.070216799" watchObservedRunningTime="2026-02-03 00:28:04.306636872 +0000 UTC m=+776.072626883" Feb 03 00:28:09 crc kubenswrapper[4798]: I0203 00:28:09.319960 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" event={"ID":"cdb1b01d-b0e6-411b-98d4-794ff69012ed","Type":"ContainerStarted","Data":"54b00b505f327ec71638697fa658deb25113a653c5cfc7f78929f048bf87a940"} Feb 03 00:28:10 crc kubenswrapper[4798]: I0203 00:28:10.275755 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-6888856db4-l5zfv" Feb 03 00:28:10 crc kubenswrapper[4798]: I0203 00:28:10.295646 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qm6qm" podStartSLOduration=-9223372004.55916 podStartE2EDuration="32.295615762s" podCreationTimestamp="2026-02-03 00:27:38 +0000 UTC" firstStartedPulling="2026-02-03 00:27:39.148058227 +0000 UTC m=+750.914048238" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:28:09.343887519 +0000 UTC m=+781.109877530" watchObservedRunningTime="2026-02-03 00:28:10.295615762 +0000 UTC m=+782.061605803" Feb 03 00:28:11 crc kubenswrapper[4798]: I0203 00:28:11.336782 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8","Type":"ContainerStarted","Data":"68ff3b9e680c11b8f5067ad43da9b7f3a5d71038d27fb15aad0bffd95105c933"} Feb 03 00:28:12 crc kubenswrapper[4798]: I0203 00:28:12.344449 4798 generic.go:334] "Generic (PLEG): container finished" podID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" containerID="68ff3b9e680c11b8f5067ad43da9b7f3a5d71038d27fb15aad0bffd95105c933" exitCode=0 Feb 03 00:28:12 crc kubenswrapper[4798]: I0203 00:28:12.344499 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8","Type":"ContainerDied","Data":"68ff3b9e680c11b8f5067ad43da9b7f3a5d71038d27fb15aad0bffd95105c933"} Feb 03 00:28:13 crc kubenswrapper[4798]: I0203 00:28:13.352905 4798 generic.go:334] "Generic (PLEG): container finished" podID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" containerID="821284f513fc39fb006decc3b8f8d67188d87320bb75470b665cc6dbe202261b" exitCode=0 Feb 03 00:28:13 crc kubenswrapper[4798]: I0203 00:28:13.353415 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8","Type":"ContainerDied","Data":"821284f513fc39fb006decc3b8f8d67188d87320bb75470b665cc6dbe202261b"} Feb 03 00:28:13 crc kubenswrapper[4798]: I0203 00:28:13.866260 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:28:13 crc kubenswrapper[4798]: I0203 00:28:13.866340 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:28:14 crc kubenswrapper[4798]: I0203 00:28:14.362269 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8","Type":"ContainerStarted","Data":"718d05f0de11c3952d6dc0301607a8f14f6b760dc8e1fb62594b235ca66105a8"} Feb 03 00:28:14 crc kubenswrapper[4798]: I0203 00:28:14.362855 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:28:14 crc kubenswrapper[4798]: I0203 00:28:14.392257 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=4.841171685 podStartE2EDuration="46.392239642s" podCreationTimestamp="2026-02-03 00:27:28 +0000 UTC" firstStartedPulling="2026-02-03 00:27:28.871848224 +0000 UTC m=+740.637838235" lastFinishedPulling="2026-02-03 00:28:10.422916181 +0000 UTC m=+782.188906192" observedRunningTime="2026-02-03 00:28:14.392033287 +0000 UTC m=+786.158023308" watchObservedRunningTime="2026-02-03 
00:28:14.392239642 +0000 UTC m=+786.158229653" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.940356 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.942070 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.944770 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.944914 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.944962 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.944979 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-h6dxt" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.954837 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994262 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994311 
4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994342 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-h6dxt-push\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994375 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994403 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjj5x\" (UniqueName: \"kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994433 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994455 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-h6dxt-pull\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994471 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994488 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994510 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994528 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994701 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:19 crc kubenswrapper[4798]: I0203 00:28:19.994782 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.019710 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095515 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095582 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095616 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095646 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095701 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-h6dxt-push\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095815 4798 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095726 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095928 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjj5x\" (UniqueName: \"kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.095986 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096055 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-h6dxt-pull\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096075 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096096 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096126 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096127 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096141 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096155 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096210 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096562 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096585 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc 
kubenswrapper[4798]: I0203 00:28:20.096612 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.096912 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.097194 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.102237 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-h6dxt-pull\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.103323 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-h6dxt-push\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.112806 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.123472 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjj5x\" (UniqueName: \"kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.255374 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:28:20 crc kubenswrapper[4798]: I0203 00:28:20.729858 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 03 00:28:20 crc kubenswrapper[4798]: W0203 00:28:20.736303 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d9ff47d_d1bd_4be0_9c12_ea3b51d075d5.slice/crio-9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a WatchSource:0}: Error finding container 9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a: Status 404 returned error can't find the container with id 9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a Feb 03 00:28:21 crc kubenswrapper[4798]: I0203 00:28:21.410827 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerStarted","Data":"9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a"} Feb 03 00:28:23 crc kubenswrapper[4798]: I0203 00:28:23.499108 4798 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8" containerName="elasticsearch" probeResult="failure" output=< Feb 03 00:28:23 crc kubenswrapper[4798]: {"timestamp": "2026-02-03T00:28:23+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 03 00:28:23 crc kubenswrapper[4798]: > Feb 03 00:28:26 crc kubenswrapper[4798]: I0203 00:28:26.453270 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerStarted","Data":"1bd9d2babf52ce1108841912dcd5906f0f1ee03de9fc54288b89a74ded209068"} Feb 03 00:28:28 crc kubenswrapper[4798]: I0203 
00:28:28.467629 4798 generic.go:334] "Generic (PLEG): container finished" podID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerID="1bd9d2babf52ce1108841912dcd5906f0f1ee03de9fc54288b89a74ded209068" exitCode=0 Feb 03 00:28:28 crc kubenswrapper[4798]: I0203 00:28:28.467690 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerDied","Data":"1bd9d2babf52ce1108841912dcd5906f0f1ee03de9fc54288b89a74ded209068"} Feb 03 00:28:28 crc kubenswrapper[4798]: I0203 00:28:28.827483 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 03 00:28:29 crc kubenswrapper[4798]: I0203 00:28:29.476024 4798 generic.go:334] "Generic (PLEG): container finished" podID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerID="8bc9c5cd1854161ffc3bf5c0aa15446e97bbfe48775e7180d889f39ae25e09f7" exitCode=0 Feb 03 00:28:29 crc kubenswrapper[4798]: I0203 00:28:29.476068 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerDied","Data":"8bc9c5cd1854161ffc3bf5c0aa15446e97bbfe48775e7180d889f39ae25e09f7"} Feb 03 00:28:29 crc kubenswrapper[4798]: I0203 00:28:29.520495 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5/manage-dockerfile/0.log" Feb 03 00:28:30 crc kubenswrapper[4798]: I0203 00:28:30.486822 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerStarted","Data":"e5946fd8b6b6995d38da39b6afb97c56a1f9315fa7e2e35814ffd0c8e270e4aa"} Feb 03 00:28:30 crc kubenswrapper[4798]: I0203 00:28:30.529201 4798 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=7.184198543 podStartE2EDuration="11.52918536s" podCreationTimestamp="2026-02-03 00:28:19 +0000 UTC" firstStartedPulling="2026-02-03 00:28:20.73833167 +0000 UTC m=+792.504321681" lastFinishedPulling="2026-02-03 00:28:25.083318487 +0000 UTC m=+796.849308498" observedRunningTime="2026-02-03 00:28:30.524454458 +0000 UTC m=+802.290444469" watchObservedRunningTime="2026-02-03 00:28:30.52918536 +0000 UTC m=+802.295175371" Feb 03 00:28:43 crc kubenswrapper[4798]: I0203 00:28:43.866730 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:28:43 crc kubenswrapper[4798]: I0203 00:28:43.867356 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:29:13 crc kubenswrapper[4798]: I0203 00:29:13.867726 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:29:13 crc kubenswrapper[4798]: I0203 00:29:13.868361 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:29:13 crc kubenswrapper[4798]: I0203 00:29:13.868418 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:29:13 crc kubenswrapper[4798]: I0203 00:29:13.869157 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 00:29:13 crc kubenswrapper[4798]: I0203 00:29:13.869226 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b" gracePeriod=600 Feb 03 00:29:14 crc kubenswrapper[4798]: I0203 00:29:14.754407 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b" exitCode=0 Feb 03 00:29:14 crc kubenswrapper[4798]: I0203 00:29:14.754514 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b"} Feb 03 00:29:14 crc kubenswrapper[4798]: I0203 00:29:14.754838 4798 scope.go:117] "RemoveContainer" containerID="0f03b0766898d960db7dfb7d118abdc25b38ac734616e34343665c5d2613c7bb" Feb 03 00:29:15 crc kubenswrapper[4798]: I0203 00:29:15.766957 4798 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650"} Feb 03 00:29:24 crc kubenswrapper[4798]: I0203 00:29:24.820637 4798 generic.go:334] "Generic (PLEG): container finished" podID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerID="e5946fd8b6b6995d38da39b6afb97c56a1f9315fa7e2e35814ffd0c8e270e4aa" exitCode=0 Feb 03 00:29:24 crc kubenswrapper[4798]: I0203 00:29:24.820693 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerDied","Data":"e5946fd8b6b6995d38da39b6afb97c56a1f9315fa7e2e35814ffd0c8e270e4aa"} Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.073463 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187609 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187736 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187782 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-h6dxt-push\" (UniqueName: 
\"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187831 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187880 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187962 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.187998 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188068 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjj5x\" (UniqueName: \"kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x\") 
pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188105 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188152 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188198 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188237 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188267 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-h6dxt-pull\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull\") pod \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\" (UID: \"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5\") " Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188424 4798 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188640 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188701 4798 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.188967 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.189694 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.189983 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.190017 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.190039 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.194010 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull" (OuterVolumeSpecName: "builder-dockercfg-h6dxt-pull") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "builder-dockercfg-h6dxt-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.194016 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x" (OuterVolumeSpecName: "kube-api-access-xjj5x") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "kube-api-access-xjj5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.194034 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.195817 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push" (OuterVolumeSpecName: "builder-dockercfg-h6dxt-push") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "builder-dockercfg-h6dxt-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290079 4798 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290389 4798 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290403 4798 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290415 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjj5x\" (UniqueName: \"kubernetes.io/projected/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-kube-api-access-xjj5x\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290427 4798 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290439 4798 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290450 4798 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290461 4798 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-h6dxt-pull\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-pull\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290473 4798 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.290484 4798 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-h6dxt-push\" (UniqueName: \"kubernetes.io/secret/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-builder-dockercfg-h6dxt-push\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.375487 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.391601 4798 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.833828 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5","Type":"ContainerDied","Data":"9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a"} Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.833880 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 03 00:29:26 crc kubenswrapper[4798]: I0203 00:29:26.833893 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c60c5028c96b26026b327d8d3c2fcdf86a3d1fcdcee8cff88472afeb628656a" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.500247 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:27 crc kubenswrapper[4798]: E0203 00:29:27.500490 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="manage-dockerfile" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.500505 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="manage-dockerfile" Feb 03 00:29:27 crc kubenswrapper[4798]: E0203 00:29:27.500517 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="docker-build" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.500524 4798 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="docker-build" Feb 03 00:29:27 crc kubenswrapper[4798]: E0203 00:29:27.500540 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="git-clone" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.500550 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="git-clone" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.500694 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" containerName="docker-build" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.502343 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.505790 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-9h9zf" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.513693 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.610470 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xq7\" (UniqueName: \"kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7\") pod \"infrawatch-operators-r4t9g\" (UID: \"c11c128d-d808-479a-8e28-5dc21617377a\") " pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.712237 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xq7\" (UniqueName: \"kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7\") pod \"infrawatch-operators-r4t9g\" (UID: \"c11c128d-d808-479a-8e28-5dc21617377a\") " 
pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.734711 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xq7\" (UniqueName: \"kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7\") pod \"infrawatch-operators-r4t9g\" (UID: \"c11c128d-d808-479a-8e28-5dc21617377a\") " pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:27 crc kubenswrapper[4798]: I0203 00:29:27.826742 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:28 crc kubenswrapper[4798]: I0203 00:29:28.051683 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:28 crc kubenswrapper[4798]: I0203 00:29:28.847077 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-r4t9g" event={"ID":"c11c128d-d808-479a-8e28-5dc21617377a","Type":"ContainerStarted","Data":"6990e71b96130ba7c5601b773ec2272d82379b30e3d72f1a3676cc64dc8531e9"} Feb 03 00:29:31 crc kubenswrapper[4798]: I0203 00:29:31.349759 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5" (UID: "8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:29:31 crc kubenswrapper[4798]: I0203 00:29:31.358775 4798 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9ff47d-d1bd-4be0-9c12-ea3b51d075d5-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:31 crc kubenswrapper[4798]: I0203 00:29:31.873922 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.678901 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-hfxwl"] Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.681391 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.685522 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hfxwl"] Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.782254 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjnf\" (UniqueName: \"kubernetes.io/projected/3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0-kube-api-access-wmjnf\") pod \"infrawatch-operators-hfxwl\" (UID: \"3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0\") " pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.883263 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmjnf\" (UniqueName: \"kubernetes.io/projected/3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0-kube-api-access-wmjnf\") pod \"infrawatch-operators-hfxwl\" (UID: \"3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0\") " pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:32 crc kubenswrapper[4798]: I0203 00:29:32.902692 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wmjnf\" (UniqueName: \"kubernetes.io/projected/3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0-kube-api-access-wmjnf\") pod \"infrawatch-operators-hfxwl\" (UID: \"3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0\") " pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:33 crc kubenswrapper[4798]: I0203 00:29:33.006745 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:33 crc kubenswrapper[4798]: I0203 00:29:33.504857 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hfxwl"] Feb 03 00:29:33 crc kubenswrapper[4798]: I0203 00:29:33.877050 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hfxwl" event={"ID":"3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0","Type":"ContainerStarted","Data":"58d842b57ae534aebce54f0b05ebb7244ba8509769c6e65c643597f3281fc8fb"} Feb 03 00:29:45 crc kubenswrapper[4798]: I0203 00:29:45.970698 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hfxwl" event={"ID":"3cdf33c8-dbf0-4da6-a5a3-3a7916d3f4c0","Type":"ContainerStarted","Data":"d135ce9153f2b51e22be16f125165a467521ca272fc5d7d74ea784ca99c48933"} Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:45.972648 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-r4t9g" event={"ID":"c11c128d-d808-479a-8e28-5dc21617377a","Type":"ContainerStarted","Data":"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc"} Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:45.972744 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-r4t9g" podUID="c11c128d-d808-479a-8e28-5dc21617377a" containerName="registry-server" containerID="cri-o://afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc" gracePeriod=2 Feb 03 00:29:46 crc 
kubenswrapper[4798]: I0203 00:29:46.008184 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-hfxwl" podStartSLOduration=2.0938045 podStartE2EDuration="14.00815741s" podCreationTimestamp="2026-02-03 00:29:32 +0000 UTC" firstStartedPulling="2026-02-03 00:29:33.537336705 +0000 UTC m=+865.303326706" lastFinishedPulling="2026-02-03 00:29:45.451689605 +0000 UTC m=+877.217679616" observedRunningTime="2026-02-03 00:29:45.987345902 +0000 UTC m=+877.753335953" watchObservedRunningTime="2026-02-03 00:29:46.00815741 +0000 UTC m=+877.774147431" Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.442103 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.576823 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xq7\" (UniqueName: \"kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7\") pod \"c11c128d-d808-479a-8e28-5dc21617377a\" (UID: \"c11c128d-d808-479a-8e28-5dc21617377a\") " Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.581439 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7" (OuterVolumeSpecName: "kube-api-access-z4xq7") pod "c11c128d-d808-479a-8e28-5dc21617377a" (UID: "c11c128d-d808-479a-8e28-5dc21617377a"). InnerVolumeSpecName "kube-api-access-z4xq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.679012 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4xq7\" (UniqueName: \"kubernetes.io/projected/c11c128d-d808-479a-8e28-5dc21617377a-kube-api-access-z4xq7\") on node \"crc\" DevicePath \"\"" Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.979529 4798 generic.go:334] "Generic (PLEG): container finished" podID="c11c128d-d808-479a-8e28-5dc21617377a" containerID="afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc" exitCode=0 Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.979606 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-r4t9g" Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.979605 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-r4t9g" event={"ID":"c11c128d-d808-479a-8e28-5dc21617377a","Type":"ContainerDied","Data":"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc"} Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.980057 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-r4t9g" event={"ID":"c11c128d-d808-479a-8e28-5dc21617377a","Type":"ContainerDied","Data":"6990e71b96130ba7c5601b773ec2272d82379b30e3d72f1a3676cc64dc8531e9"} Feb 03 00:29:46 crc kubenswrapper[4798]: I0203 00:29:46.980083 4798 scope.go:117] "RemoveContainer" containerID="afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc" Feb 03 00:29:47 crc kubenswrapper[4798]: I0203 00:29:47.003843 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:47 crc kubenswrapper[4798]: I0203 00:29:47.005217 4798 scope.go:117] "RemoveContainer" containerID="afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc" Feb 03 00:29:47 crc kubenswrapper[4798]: 
E0203 00:29:47.005687 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc\": container with ID starting with afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc not found: ID does not exist" containerID="afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc" Feb 03 00:29:47 crc kubenswrapper[4798]: I0203 00:29:47.005718 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc"} err="failed to get container status \"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc\": rpc error: code = NotFound desc = could not find container \"afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc\": container with ID starting with afae6324d1cf2e0d9a7cf4539d3d2cef30db07a580060fd955069c434b375efc not found: ID does not exist" Feb 03 00:29:47 crc kubenswrapper[4798]: I0203 00:29:47.010390 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-r4t9g"] Feb 03 00:29:48 crc kubenswrapper[4798]: I0203 00:29:48.920917 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11c128d-d808-479a-8e28-5dc21617377a" path="/var/lib/kubelet/pods/c11c128d-d808-479a-8e28-5dc21617377a/volumes" Feb 03 00:29:53 crc kubenswrapper[4798]: I0203 00:29:53.008886 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:53 crc kubenswrapper[4798]: I0203 00:29:53.010236 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:29:53 crc kubenswrapper[4798]: I0203 00:29:53.041082 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-hfxwl" 
Feb 03 00:29:54 crc kubenswrapper[4798]: I0203 00:29:54.045099 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-hfxwl" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.504427 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt"] Feb 03 00:30:00 crc kubenswrapper[4798]: E0203 00:30:00.505046 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11c128d-d808-479a-8e28-5dc21617377a" containerName="registry-server" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.505064 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11c128d-d808-479a-8e28-5dc21617377a" containerName="registry-server" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.505197 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11c128d-d808-479a-8e28-5dc21617377a" containerName="registry-server" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.506156 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.525537 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt"] Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.576698 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.577014 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.577125 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wx8t\" (UniqueName: \"kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.602260 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx"] Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 
00:30:00.603142 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.605613 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.605995 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.610669 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx"] Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678215 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678276 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678308 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx8t\" (UniqueName: \"kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t\") pod 
\"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678354 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678398 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d6z2\" (UniqueName: \"kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678446 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.678889 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc 
kubenswrapper[4798]: I0203 00:30:00.678997 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.703872 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx8t\" (UniqueName: \"kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.779864 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.779971 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.780014 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d6z2\" (UniqueName: 
\"kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.781408 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.784279 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.797255 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d6z2\" (UniqueName: \"kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2\") pod \"collect-profiles-29501310-t75rx\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.824410 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:00 crc kubenswrapper[4798]: I0203 00:30:00.920487 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.136007 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx"] Feb 03 00:30:01 crc kubenswrapper[4798]: W0203 00:30:01.140767 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec23fe7d_805f_4444_a972_9fc444b4c62d.slice/crio-b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936 WatchSource:0}: Error finding container b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936: Status 404 returned error can't find the container with id b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936 Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.231299 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt"] Feb 03 00:30:01 crc kubenswrapper[4798]: W0203 00:30:01.238532 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf75fd1e_a4fa_4394_afaa_faa6f8172114.slice/crio-c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717 WatchSource:0}: Error finding container c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717: Status 404 returned error can't find the container with id c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717 Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.329572 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7"] Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.330885 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.334511 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.341637 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7"] Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.493677 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.494027 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.494190 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cqn\" (UniqueName: \"kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: 
I0203 00:30:01.595183 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.595259 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.595297 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cqn\" (UniqueName: \"kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.595769 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.595950 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.614326 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cqn\" (UniqueName: \"kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:01 crc kubenswrapper[4798]: I0203 00:30:01.644943 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.042080 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7"] Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.072863 4798 generic.go:334] "Generic (PLEG): container finished" podID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerID="1d68eb3285087bb11cbda395ad68cab8c56a83e2b44adfd2f83ebbb19533516a" exitCode=0 Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.072959 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" event={"ID":"df75fd1e-a4fa-4394-afaa-faa6f8172114","Type":"ContainerDied","Data":"1d68eb3285087bb11cbda395ad68cab8c56a83e2b44adfd2f83ebbb19533516a"} Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.073265 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" event={"ID":"df75fd1e-a4fa-4394-afaa-faa6f8172114","Type":"ContainerStarted","Data":"c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717"} Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.084992 4798 generic.go:334] "Generic (PLEG): container finished" podID="ec23fe7d-805f-4444-a972-9fc444b4c62d" containerID="c84537d86d2596ee90c7f32cf1abc376c6e54236fdfc6c84bbb06a05d88095e7" exitCode=0 Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.085151 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" event={"ID":"ec23fe7d-805f-4444-a972-9fc444b4c62d","Type":"ContainerDied","Data":"c84537d86d2596ee90c7f32cf1abc376c6e54236fdfc6c84bbb06a05d88095e7"} Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.085213 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" event={"ID":"ec23fe7d-805f-4444-a972-9fc444b4c62d","Type":"ContainerStarted","Data":"b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936"} Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.088927 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" event={"ID":"1d3093ff-b293-4d02-9db9-724830306c14","Type":"ContainerStarted","Data":"9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d"} Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.341209 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr"] Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.342648 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.347223 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr"] Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.507311 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.507416 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxs8w\" (UniqueName: \"kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.507456 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.608641 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxs8w\" (UniqueName: 
\"kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.609043 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.609109 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.609468 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.609590 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " 
pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.625936 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxs8w\" (UniqueName: \"kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.666276 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" Feb 03 00:30:02 crc kubenswrapper[4798]: I0203 00:30:02.863256 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr"] Feb 03 00:30:02 crc kubenswrapper[4798]: W0203 00:30:02.872863 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e85934_2a70_4f88_b8a5_4b5643809cde.slice/crio-119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866 WatchSource:0}: Error finding container 119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866: Status 404 returned error can't find the container with id 119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866 Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.098337 4798 generic.go:334] "Generic (PLEG): container finished" podID="1d3093ff-b293-4d02-9db9-724830306c14" containerID="fd45b17cb006743580dc08db5a371e7ab89401b90fe7c7cb236b13429236f59a" exitCode=0 Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.098433 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" 
event={"ID":"1d3093ff-b293-4d02-9db9-724830306c14","Type":"ContainerDied","Data":"fd45b17cb006743580dc08db5a371e7ab89401b90fe7c7cb236b13429236f59a"} Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.101274 4798 generic.go:334] "Generic (PLEG): container finished" podID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerID="7bb31e28f3aa435d6cacdd51189edc3dd8f9f85dde2a85a2592d18e3bff7745b" exitCode=0 Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.101337 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" event={"ID":"a8e85934-2a70-4f88-b8a5-4b5643809cde","Type":"ContainerDied","Data":"7bb31e28f3aa435d6cacdd51189edc3dd8f9f85dde2a85a2592d18e3bff7745b"} Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.101366 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" event={"ID":"a8e85934-2a70-4f88-b8a5-4b5643809cde","Type":"ContainerStarted","Data":"119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866"} Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.104099 4798 generic.go:334] "Generic (PLEG): container finished" podID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerID="dc9b245761485b008ba9e06bed463c957f7f2717fbfdb93bdbcd78c94ed6eb96" exitCode=0 Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.104759 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" event={"ID":"df75fd1e-a4fa-4394-afaa-faa6f8172114","Type":"ContainerDied","Data":"dc9b245761485b008ba9e06bed463c957f7f2717fbfdb93bdbcd78c94ed6eb96"} Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.434701 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.522977 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d6z2\" (UniqueName: \"kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2\") pod \"ec23fe7d-805f-4444-a972-9fc444b4c62d\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.523124 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume\") pod \"ec23fe7d-805f-4444-a972-9fc444b4c62d\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.523168 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume\") pod \"ec23fe7d-805f-4444-a972-9fc444b4c62d\" (UID: \"ec23fe7d-805f-4444-a972-9fc444b4c62d\") " Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.540763 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec23fe7d-805f-4444-a972-9fc444b4c62d" (UID: "ec23fe7d-805f-4444-a972-9fc444b4c62d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.548153 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec23fe7d-805f-4444-a972-9fc444b4c62d" (UID: "ec23fe7d-805f-4444-a972-9fc444b4c62d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.549002 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2" (OuterVolumeSpecName: "kube-api-access-6d6z2") pod "ec23fe7d-805f-4444-a972-9fc444b4c62d" (UID: "ec23fe7d-805f-4444-a972-9fc444b4c62d"). InnerVolumeSpecName "kube-api-access-6d6z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.625315 4798 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec23fe7d-805f-4444-a972-9fc444b4c62d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.625397 4798 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec23fe7d-805f-4444-a972-9fc444b4c62d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:03 crc kubenswrapper[4798]: I0203 00:30:03.625412 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d6z2\" (UniqueName: \"kubernetes.io/projected/ec23fe7d-805f-4444-a972-9fc444b4c62d-kube-api-access-6d6z2\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.113526 4798 generic.go:334] "Generic (PLEG): container finished" podID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerID="607b13c8b4c9cacc49a2c2c6fb5dd37ffb1f9f6e302f721587bdd1d224c42dab" exitCode=0 Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.113628 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" event={"ID":"a8e85934-2a70-4f88-b8a5-4b5643809cde","Type":"ContainerDied","Data":"607b13c8b4c9cacc49a2c2c6fb5dd37ffb1f9f6e302f721587bdd1d224c42dab"} Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 
00:30:04.115922 4798 generic.go:334] "Generic (PLEG): container finished" podID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerID="c456fccd760581747c79718b9b89f26a182c3e3d363b9e030f8535216a99dcf7" exitCode=0 Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.115957 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" event={"ID":"df75fd1e-a4fa-4394-afaa-faa6f8172114","Type":"ContainerDied","Data":"c456fccd760581747c79718b9b89f26a182c3e3d363b9e030f8535216a99dcf7"} Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.119790 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" event={"ID":"ec23fe7d-805f-4444-a972-9fc444b4c62d","Type":"ContainerDied","Data":"b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936"} Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.119823 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b294240aa6ae0072a6887223a90482a31ff4ead9e4985d6432bbe965098ee936" Feb 03 00:30:04 crc kubenswrapper[4798]: I0203 00:30:04.119902 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501310-t75rx" Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.129475 4798 generic.go:334] "Generic (PLEG): container finished" podID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerID="ca54781958147d2e5f473cc2b7449be16675747324e0dce6e61005ca06b7c855" exitCode=0 Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.129523 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" event={"ID":"a8e85934-2a70-4f88-b8a5-4b5643809cde","Type":"ContainerDied","Data":"ca54781958147d2e5f473cc2b7449be16675747324e0dce6e61005ca06b7c855"} Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.412933 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.549686 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wx8t\" (UniqueName: \"kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t\") pod \"df75fd1e-a4fa-4394-afaa-faa6f8172114\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.549815 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle\") pod \"df75fd1e-a4fa-4394-afaa-faa6f8172114\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.549854 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util\") pod \"df75fd1e-a4fa-4394-afaa-faa6f8172114\" (UID: \"df75fd1e-a4fa-4394-afaa-faa6f8172114\") " Feb 03 00:30:05 
crc kubenswrapper[4798]: I0203 00:30:05.551052 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle" (OuterVolumeSpecName: "bundle") pod "df75fd1e-a4fa-4394-afaa-faa6f8172114" (UID: "df75fd1e-a4fa-4394-afaa-faa6f8172114"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.558724 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t" (OuterVolumeSpecName: "kube-api-access-5wx8t") pod "df75fd1e-a4fa-4394-afaa-faa6f8172114" (UID: "df75fd1e-a4fa-4394-afaa-faa6f8172114"). InnerVolumeSpecName "kube-api-access-5wx8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.572581 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util" (OuterVolumeSpecName: "util") pod "df75fd1e-a4fa-4394-afaa-faa6f8172114" (UID: "df75fd1e-a4fa-4394-afaa-faa6f8172114"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.651178 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wx8t\" (UniqueName: \"kubernetes.io/projected/df75fd1e-a4fa-4394-afaa-faa6f8172114-kube-api-access-5wx8t\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.651228 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:05 crc kubenswrapper[4798]: I0203 00:30:05.651241 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df75fd1e-a4fa-4394-afaa-faa6f8172114-util\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.143151 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt" event={"ID":"df75fd1e-a4fa-4394-afaa-faa6f8172114","Type":"ContainerDied","Data":"c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717"}
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.143235 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c43a43dcca94fcda068f490475cf33ae59ef9697a4538b1b36482e213408a717"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.144867 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bwpcrt"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.146053 4798 generic.go:334] "Generic (PLEG): container finished" podID="1d3093ff-b293-4d02-9db9-724830306c14" containerID="d16d872f0026f4032a5412318274cfca818dc5bd5a0e6e708aba08c15d6e7607" exitCode=0
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.146185 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" event={"ID":"1d3093ff-b293-4d02-9db9-724830306c14","Type":"ContainerDied","Data":"d16d872f0026f4032a5412318274cfca818dc5bd5a0e6e708aba08c15d6e7607"}
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.284471 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"]
Feb 03 00:30:06 crc kubenswrapper[4798]: E0203 00:30:06.285291 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="util"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285306 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="util"
Feb 03 00:30:06 crc kubenswrapper[4798]: E0203 00:30:06.285327 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec23fe7d-805f-4444-a972-9fc444b4c62d" containerName="collect-profiles"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285335 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec23fe7d-805f-4444-a972-9fc444b4c62d" containerName="collect-profiles"
Feb 03 00:30:06 crc kubenswrapper[4798]: E0203 00:30:06.285348 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="pull"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285355 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="pull"
Feb 03 00:30:06 crc kubenswrapper[4798]: E0203 00:30:06.285368 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="extract"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285375 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="extract"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285793 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec23fe7d-805f-4444-a972-9fc444b4c62d" containerName="collect-profiles"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.285820 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="df75fd1e-a4fa-4394-afaa-faa6f8172114" containerName="extract"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.286956 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.302249 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"]
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.360857 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.360911 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr6rr\" (UniqueName: \"kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.360932 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.404778 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462001 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util\") pod \"a8e85934-2a70-4f88-b8a5-4b5643809cde\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") "
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462051 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxs8w\" (UniqueName: \"kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w\") pod \"a8e85934-2a70-4f88-b8a5-4b5643809cde\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") "
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462187 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle\") pod \"a8e85934-2a70-4f88-b8a5-4b5643809cde\" (UID: \"a8e85934-2a70-4f88-b8a5-4b5643809cde\") "
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462497 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462527 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr6rr\" (UniqueName: \"kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462555 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.462620 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle" (OuterVolumeSpecName: "bundle") pod "a8e85934-2a70-4f88-b8a5-4b5643809cde" (UID: "a8e85934-2a70-4f88-b8a5-4b5643809cde"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.463497 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.463573 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.463604 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.475544 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w" (OuterVolumeSpecName: "kube-api-access-nxs8w") pod "a8e85934-2a70-4f88-b8a5-4b5643809cde" (UID: "a8e85934-2a70-4f88-b8a5-4b5643809cde"). InnerVolumeSpecName "kube-api-access-nxs8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.480459 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr6rr\" (UniqueName: \"kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr\") pod \"redhat-operators-2r7nf\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.482585 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util" (OuterVolumeSpecName: "util") pod "a8e85934-2a70-4f88-b8a5-4b5643809cde" (UID: "a8e85934-2a70-4f88-b8a5-4b5643809cde"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.564897 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8e85934-2a70-4f88-b8a5-4b5643809cde-util\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.564933 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxs8w\" (UniqueName: \"kubernetes.io/projected/a8e85934-2a70-4f88-b8a5-4b5643809cde-kube-api-access-nxs8w\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.608812 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2r7nf"
Feb 03 00:30:06 crc kubenswrapper[4798]: I0203 00:30:06.810666 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"]
Feb 03 00:30:06 crc kubenswrapper[4798]: W0203 00:30:06.817869 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21efe8c6_cfa4_4cd5_bf5b_8d250a17b55f.slice/crio-c47c1b550995ac857785bba35b24f396605b19e628d854572271799962648d9d WatchSource:0}: Error finding container c47c1b550995ac857785bba35b24f396605b19e628d854572271799962648d9d: Status 404 returned error can't find the container with id c47c1b550995ac857785bba35b24f396605b19e628d854572271799962648d9d
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.152767 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerStarted","Data":"c47c1b550995ac857785bba35b24f396605b19e628d854572271799962648d9d"}
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.155887 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr" event={"ID":"a8e85934-2a70-4f88-b8a5-4b5643809cde","Type":"ContainerDied","Data":"119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866"}
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.155917 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119f2d22a49da40a16467ac7712517328ed1b106431fb1d0df76380c73398866"
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.155950 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebtcpqr"
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.157928 4798 generic.go:334] "Generic (PLEG): container finished" podID="1d3093ff-b293-4d02-9db9-724830306c14" containerID="2e01ec3835224066fc26157e4b4a45f2afabf83428b3d285ab1fe35034d47e80" exitCode=0
Feb 03 00:30:07 crc kubenswrapper[4798]: I0203 00:30:07.157968 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" event={"ID":"1d3093ff-b293-4d02-9db9-724830306c14","Type":"ContainerDied","Data":"2e01ec3835224066fc26157e4b4a45f2afabf83428b3d285ab1fe35034d47e80"}
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.171613 4798 generic.go:334] "Generic (PLEG): container finished" podID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerID="d541e4bde8806177d301faeede86824f32f92e1e8f1d58e6c530a1008b2c51a5" exitCode=0
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.171736 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerDied","Data":"d541e4bde8806177d301faeede86824f32f92e1e8f1d58e6c530a1008b2c51a5"}
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.471395 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7"
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.592282 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle\") pod \"1d3093ff-b293-4d02-9db9-724830306c14\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") "
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.592349 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9cqn\" (UniqueName: \"kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn\") pod \"1d3093ff-b293-4d02-9db9-724830306c14\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") "
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.592403 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util\") pod \"1d3093ff-b293-4d02-9db9-724830306c14\" (UID: \"1d3093ff-b293-4d02-9db9-724830306c14\") "
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.592972 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle" (OuterVolumeSpecName: "bundle") pod "1d3093ff-b293-4d02-9db9-724830306c14" (UID: "1d3093ff-b293-4d02-9db9-724830306c14"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.597986 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn" (OuterVolumeSpecName: "kube-api-access-g9cqn") pod "1d3093ff-b293-4d02-9db9-724830306c14" (UID: "1d3093ff-b293-4d02-9db9-724830306c14"). InnerVolumeSpecName "kube-api-access-g9cqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.604349 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util" (OuterVolumeSpecName: "util") pod "1d3093ff-b293-4d02-9db9-724830306c14" (UID: "1d3093ff-b293-4d02-9db9-724830306c14"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.693979 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9cqn\" (UniqueName: \"kubernetes.io/projected/1d3093ff-b293-4d02-9db9-724830306c14-kube-api-access-g9cqn\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.694033 4798 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-util\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:08 crc kubenswrapper[4798]: I0203 00:30:08.694055 4798 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d3093ff-b293-4d02-9db9-724830306c14-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 00:30:09 crc kubenswrapper[4798]: I0203 00:30:09.181074 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7" event={"ID":"1d3093ff-b293-4d02-9db9-724830306c14","Type":"ContainerDied","Data":"9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d"}
Feb 03 00:30:09 crc kubenswrapper[4798]: I0203 00:30:09.181117 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d"
Feb 03 00:30:09 crc kubenswrapper[4798]: I0203 00:30:09.181184 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7"
Feb 03 00:30:09 crc kubenswrapper[4798]: E0203 00:30:09.842191 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache]"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.199695 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-d4gf9"]
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200508 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="util"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200524 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="util"
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200534 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200543 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200558 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="util"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200565 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="util"
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200577 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200583 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200594 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="pull"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200602 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="pull"
Feb 03 00:30:15 crc kubenswrapper[4798]: E0203 00:30:15.200618 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="pull"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200625 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="pull"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200761 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e85934-2a70-4f88-b8a5-4b5643809cde" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.200779 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3093ff-b293-4d02-9db9-724830306c14" containerName="extract"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.201267 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.203534 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-bpvv4"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.211578 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-d4gf9"]
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.279945 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wspj\" (UniqueName: \"kubernetes.io/projected/39d68054-552b-4cc9-940c-c3b890e11c5d-kube-api-access-6wspj\") pod \"interconnect-operator-5bb49f789d-d4gf9\" (UID: \"39d68054-552b-4cc9-940c-c3b890e11c5d\") " pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.381584 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wspj\" (UniqueName: \"kubernetes.io/projected/39d68054-552b-4cc9-940c-c3b890e11c5d-kube-api-access-6wspj\") pod \"interconnect-operator-5bb49f789d-d4gf9\" (UID: \"39d68054-552b-4cc9-940c-c3b890e11c5d\") " pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.403538 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wspj\" (UniqueName: \"kubernetes.io/projected/39d68054-552b-4cc9-940c-c3b890e11c5d-kube-api-access-6wspj\") pod \"interconnect-operator-5bb49f789d-d4gf9\" (UID: \"39d68054-552b-4cc9-940c-c3b890e11c5d\") " pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.521632 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9"
Feb 03 00:30:15 crc kubenswrapper[4798]: I0203 00:30:15.954604 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-d4gf9"]
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.227391 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9" event={"ID":"39d68054-552b-4cc9-940c-c3b890e11c5d","Type":"ContainerStarted","Data":"a45e5e1741cd4af3a75bdb57bcf1b53e16f2191174b1ccab75984b72e7b83a3b"}
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.229262 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerStarted","Data":"9bb3cd3e09c6754d16816c85cf72ae390d26c789aee869ab16ad27a8a720cfcb"}
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.798323 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"]
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.799238 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.802303 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-kvpf5"
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.810934 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"]
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.903798 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78vx\" (UniqueName: \"kubernetes.io/projected/c46c738a-1240-42e9-9d41-adab078b0a95-kube-api-access-h78vx\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:16 crc kubenswrapper[4798]: I0203 00:30:16.904103 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c46c738a-1240-42e9-9d41-adab078b0a95-runner\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.005000 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78vx\" (UniqueName: \"kubernetes.io/projected/c46c738a-1240-42e9-9d41-adab078b0a95-kube-api-access-h78vx\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.005373 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c46c738a-1240-42e9-9d41-adab078b0a95-runner\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.005868 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c46c738a-1240-42e9-9d41-adab078b0a95-runner\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.049465 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78vx\" (UniqueName: \"kubernetes.io/projected/c46c738a-1240-42e9-9d41-adab078b0a95-kube-api-access-h78vx\") pod \"smart-gateway-operator-bbbc889bc-rjnfk\" (UID: \"c46c738a-1240-42e9-9d41-adab078b0a95\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.115298 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.244934 4798 generic.go:334] "Generic (PLEG): container finished" podID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerID="9bb3cd3e09c6754d16816c85cf72ae390d26c789aee869ab16ad27a8a720cfcb" exitCode=0
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.245049 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerDied","Data":"9bb3cd3e09c6754d16816c85cf72ae390d26c789aee869ab16ad27a8a720cfcb"}
Feb 03 00:30:17 crc kubenswrapper[4798]: I0203 00:30:17.330279 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk"]
Feb 03 00:30:17 crc kubenswrapper[4798]: W0203 00:30:17.339988 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46c738a_1240_42e9_9d41_adab078b0a95.slice/crio-1b8e4e2b83247ebf29b3f3da2d85f344d50020d1d47422bf8067cefb5d29492f WatchSource:0}: Error finding container 1b8e4e2b83247ebf29b3f3da2d85f344d50020d1d47422bf8067cefb5d29492f: Status 404 returned error can't find the container with id 1b8e4e2b83247ebf29b3f3da2d85f344d50020d1d47422bf8067cefb5d29492f
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.091550 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v4jb9"]
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.093328 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.113590 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4jb9"]
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.227771 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5r7j\" (UniqueName: \"kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.227821 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.227848 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.256406 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk" event={"ID":"c46c738a-1240-42e9-9d41-adab078b0a95","Type":"ContainerStarted","Data":"1b8e4e2b83247ebf29b3f3da2d85f344d50020d1d47422bf8067cefb5d29492f"}
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.329069 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5r7j\" (UniqueName: \"kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.329126 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.329157 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.329700 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.329805 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.369157 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5r7j\" (UniqueName: \"kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j\") pod \"community-operators-v4jb9\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.370322 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"]
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.371027 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.374789 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-mk79g"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.405608 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"]
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.429051 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4jb9"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.430139 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-runner\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.430177 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6szh\" (UniqueName: \"kubernetes.io/projected/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-kube-api-access-r6szh\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.531993 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-runner\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.532075 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6szh\" (UniqueName: \"kubernetes.io/projected/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-kube-api-access-r6szh\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"
Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.532511 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName:
\"kubernetes.io/empty-dir/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-runner\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.565317 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6szh\" (UniqueName: \"kubernetes.io/projected/15f3e2a8-2d6f-47ec-aa00-9b67b995d64b-kube-api-access-r6szh\") pod \"service-telemetry-operator-55b89ddfb9-dq9bs\" (UID: \"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.710985 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" Feb 03 00:30:18 crc kubenswrapper[4798]: I0203 00:30:18.714019 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4jb9"] Feb 03 00:30:18 crc kubenswrapper[4798]: W0203 00:30:18.739020 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47a07f6_e4db_4607_b3b6_23c86aab8954.slice/crio-137d46b46efbcbd5ce51fa83bf8cec69a80336a92992fc81ed1e02b47678d6ad WatchSource:0}: Error finding container 137d46b46efbcbd5ce51fa83bf8cec69a80336a92992fc81ed1e02b47678d6ad: Status 404 returned error can't find the container with id 137d46b46efbcbd5ce51fa83bf8cec69a80336a92992fc81ed1e02b47678d6ad Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.048132 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs"] Feb 03 00:30:19 crc kubenswrapper[4798]: W0203 00:30:19.139396 4798 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15f3e2a8_2d6f_47ec_aa00_9b67b995d64b.slice/crio-25484ec8a60574f7eae3b37908ed360409ff0b95e7f9759350593024478d3d0d WatchSource:0}: Error finding container 25484ec8a60574f7eae3b37908ed360409ff0b95e7f9759350593024478d3d0d: Status 404 returned error can't find the container with id 25484ec8a60574f7eae3b37908ed360409ff0b95e7f9759350593024478d3d0d Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.265684 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerStarted","Data":"e81ac2b81f97dd0b80decbaa1f7f4c562486b7493c7c886cbcf5ed0b26d998b0"} Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.271397 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" event={"ID":"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b","Type":"ContainerStarted","Data":"25484ec8a60574f7eae3b37908ed360409ff0b95e7f9759350593024478d3d0d"} Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.273799 4798 generic.go:334] "Generic (PLEG): container finished" podID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerID="93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b" exitCode=0 Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.273889 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerDied","Data":"93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b"} Feb 03 00:30:19 crc kubenswrapper[4798]: I0203 00:30:19.273965 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerStarted","Data":"137d46b46efbcbd5ce51fa83bf8cec69a80336a92992fc81ed1e02b47678d6ad"} Feb 03 00:30:19 crc 
kubenswrapper[4798]: I0203 00:30:19.292831 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2r7nf" podStartSLOduration=2.412731306 podStartE2EDuration="13.29280609s" podCreationTimestamp="2026-02-03 00:30:06 +0000 UTC" firstStartedPulling="2026-02-03 00:30:08.174171387 +0000 UTC m=+899.940161408" lastFinishedPulling="2026-02-03 00:30:19.054246171 +0000 UTC m=+910.820236192" observedRunningTime="2026-02-03 00:30:19.292417161 +0000 UTC m=+911.058407172" watchObservedRunningTime="2026-02-03 00:30:19.29280609 +0000 UTC m=+911.058796101" Feb 03 00:30:19 crc kubenswrapper[4798]: E0203 00:30:19.985822 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache]" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.092520 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.097033 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.122905 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.172005 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4762s\" (UniqueName: \"kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.173447 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.173713 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.277379 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.277488 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4762s\" (UniqueName: \"kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.277518 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.278368 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.278474 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.302722 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4762s\" (UniqueName: \"kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s\") pod \"certified-operators-crmld\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:20 crc kubenswrapper[4798]: I0203 00:30:20.434388 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:26 crc kubenswrapper[4798]: I0203 00:30:26.609345 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:26 crc kubenswrapper[4798]: I0203 00:30:26.610181 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:26 crc kubenswrapper[4798]: I0203 00:30:26.657279 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:27 crc kubenswrapper[4798]: I0203 00:30:27.408788 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:28 crc kubenswrapper[4798]: I0203 00:30:28.476368 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"] Feb 03 00:30:28 crc kubenswrapper[4798]: I0203 00:30:28.904617 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:30:30 crc kubenswrapper[4798]: E0203 00:30:30.154822 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache]" Feb 03 00:30:30 crc kubenswrapper[4798]: I0203 00:30:30.361248 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2r7nf" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" 
containerName="registry-server" containerID="cri-o://e81ac2b81f97dd0b80decbaa1f7f4c562486b7493c7c886cbcf5ed0b26d998b0" gracePeriod=2 Feb 03 00:30:31 crc kubenswrapper[4798]: I0203 00:30:31.368346 4798 generic.go:334] "Generic (PLEG): container finished" podID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerID="e81ac2b81f97dd0b80decbaa1f7f4c562486b7493c7c886cbcf5ed0b26d998b0" exitCode=0 Feb 03 00:30:31 crc kubenswrapper[4798]: I0203 00:30:31.368480 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerDied","Data":"e81ac2b81f97dd0b80decbaa1f7f4c562486b7493c7c886cbcf5ed0b26d998b0"} Feb 03 00:30:33 crc kubenswrapper[4798]: I0203 00:30:33.389382 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerStarted","Data":"dbe2c77a512d8db128d99af9d5113224030540962b2e50bd84c6c1a9279b2430"} Feb 03 00:30:35 crc kubenswrapper[4798]: I0203 00:30:35.902177 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.013932 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr6rr\" (UniqueName: \"kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr\") pod \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.014077 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content\") pod \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.014160 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities\") pod \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\" (UID: \"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f\") " Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.015290 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities" (OuterVolumeSpecName: "utilities") pod "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" (UID: "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.021681 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr" (OuterVolumeSpecName: "kube-api-access-jr6rr") pod "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" (UID: "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f"). InnerVolumeSpecName "kube-api-access-jr6rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.116526 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.116574 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr6rr\" (UniqueName: \"kubernetes.io/projected/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-kube-api-access-jr6rr\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.162489 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" (UID: "21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.218122 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.411060 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2r7nf" event={"ID":"21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f","Type":"ContainerDied","Data":"c47c1b550995ac857785bba35b24f396605b19e628d854572271799962648d9d"} Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.411111 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2r7nf" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.411130 4798 scope.go:117] "RemoveContainer" containerID="e81ac2b81f97dd0b80decbaa1f7f4c562486b7493c7c886cbcf5ed0b26d998b0" Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.443527 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"] Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.449462 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2r7nf"] Feb 03 00:30:36 crc kubenswrapper[4798]: I0203 00:30:36.917858 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" path="/var/lib/kubelet/pods/21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f/volumes" Feb 03 00:30:40 crc kubenswrapper[4798]: I0203 00:30:40.029828 4798 scope.go:117] "RemoveContainer" containerID="9bb3cd3e09c6754d16816c85cf72ae390d26c789aee869ab16ad27a8a720cfcb" Feb 03 00:30:40 crc kubenswrapper[4798]: I0203 00:30:40.191469 4798 scope.go:117] "RemoveContainer" containerID="d541e4bde8806177d301faeede86824f32f92e1e8f1d58e6c530a1008b2c51a5" Feb 03 00:30:40 crc kubenswrapper[4798]: E0203 00:30:40.292998 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache]" Feb 03 00:30:40 crc kubenswrapper[4798]: I0203 00:30:40.451803 4798 generic.go:334] "Generic (PLEG): container finished" podID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" 
containerID="ddf814742a7921ac2b89a2536eb91a3e910d117775bf4ba6e6828d09e2b37f4c" exitCode=0 Feb 03 00:30:40 crc kubenswrapper[4798]: I0203 00:30:40.451887 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerDied","Data":"ddf814742a7921ac2b89a2536eb91a3e910d117775bf4ba6e6828d09e2b37f4c"} Feb 03 00:30:41 crc kubenswrapper[4798]: E0203 00:30:41.240638 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Feb 03 00:30:41 crc kubenswrapper[4798]: E0203 00:30:41.241047 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-ga
teway-operator.v5.0.1768085178,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h78vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-bbbc889bc-rjnfk_service-telemetry(c46c738a-1240-42e9-9d41-adab078b0a95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:30:41 crc kubenswrapper[4798]: E0203 00:30:41.242333 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk" podUID="c46c738a-1240-42e9-9d41-adab078b0a95" Feb 03 00:30:41 crc kubenswrapper[4798]: I0203 00:30:41.462153 4798 generic.go:334] "Generic (PLEG): container finished" podID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerID="dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c" exitCode=0 Feb 03 00:30:41 crc 
kubenswrapper[4798]: I0203 00:30:41.462253 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerDied","Data":"dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c"} Feb 03 00:30:41 crc kubenswrapper[4798]: I0203 00:30:41.464218 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9" event={"ID":"39d68054-552b-4cc9-940c-c3b890e11c5d","Type":"ContainerStarted","Data":"a55c7949ca13824840a4468bb4f870667e30f7334edeb60c1bf8cb22ad2ab923"} Feb 03 00:30:41 crc kubenswrapper[4798]: E0203 00:30:41.466802 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk" podUID="c46c738a-1240-42e9-9d41-adab078b0a95" Feb 03 00:30:41 crc kubenswrapper[4798]: I0203 00:30:41.498136 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-d4gf9" podStartSLOduration=9.107382549 podStartE2EDuration="26.498115308s" podCreationTimestamp="2026-02-03 00:30:15 +0000 UTC" firstStartedPulling="2026-02-03 00:30:15.960093336 +0000 UTC m=+907.726083347" lastFinishedPulling="2026-02-03 00:30:33.350826095 +0000 UTC m=+925.116816106" observedRunningTime="2026-02-03 00:30:41.496207905 +0000 UTC m=+933.262197916" watchObservedRunningTime="2026-02-03 00:30:41.498115308 +0000 UTC m=+933.264105339" Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.496458 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" event={"ID":"15f3e2a8-2d6f-47ec-aa00-9b67b995d64b","Type":"ContainerStarted","Data":"83d0dc1588874f64bd17414e60b4295c3842e7079e40dfe95f66fa0b57be2cd8"} 
Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.498093 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerStarted","Data":"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55"} Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.500121 4798 generic.go:334] "Generic (PLEG): container finished" podID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerID="f839c901e20462b1c65bb48ae4c4c74d361e7279baf2c6b69c0a0bda5c15c6ad" exitCode=0 Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.500159 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerDied","Data":"f839c901e20462b1c65bb48ae4c4c74d361e7279baf2c6b69c0a0bda5c15c6ad"} Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.516958 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-dq9bs" podStartSLOduration=1.945546859 podStartE2EDuration="28.516936282s" podCreationTimestamp="2026-02-03 00:30:18 +0000 UTC" firstStartedPulling="2026-02-03 00:30:19.144130385 +0000 UTC m=+910.910120396" lastFinishedPulling="2026-02-03 00:30:45.715519798 +0000 UTC m=+937.481509819" observedRunningTime="2026-02-03 00:30:46.514643929 +0000 UTC m=+938.280633940" watchObservedRunningTime="2026-02-03 00:30:46.516936282 +0000 UTC m=+938.282926303" Feb 03 00:30:46 crc kubenswrapper[4798]: I0203 00:30:46.534288 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v4jb9" podStartSLOduration=2.127673987 podStartE2EDuration="28.534265677s" podCreationTimestamp="2026-02-03 00:30:18 +0000 UTC" firstStartedPulling="2026-02-03 00:30:19.275357141 +0000 UTC m=+911.041347152" lastFinishedPulling="2026-02-03 00:30:45.681948831 +0000 UTC 
m=+937.447938842" observedRunningTime="2026-02-03 00:30:46.530349037 +0000 UTC m=+938.296339068" watchObservedRunningTime="2026-02-03 00:30:46.534265677 +0000 UTC m=+938.300255688" Feb 03 00:30:47 crc kubenswrapper[4798]: I0203 00:30:47.507803 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerStarted","Data":"88907ac3f2130b32ff221f8c3d15a6c93f67e77309557446a8554f9d9f14d887"} Feb 03 00:30:47 crc kubenswrapper[4798]: I0203 00:30:47.547228 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crmld" podStartSLOduration=22.042042331 podStartE2EDuration="27.547210541s" podCreationTimestamp="2026-02-03 00:30:20 +0000 UTC" firstStartedPulling="2026-02-03 00:30:41.468122204 +0000 UTC m=+933.234112215" lastFinishedPulling="2026-02-03 00:30:46.973290414 +0000 UTC m=+938.739280425" observedRunningTime="2026-02-03 00:30:47.543118768 +0000 UTC m=+939.309108779" watchObservedRunningTime="2026-02-03 00:30:47.547210541 +0000 UTC m=+939.313200552" Feb 03 00:30:48 crc kubenswrapper[4798]: I0203 00:30:48.430141 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:30:48 crc kubenswrapper[4798]: I0203 00:30:48.430204 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:30:48 crc kubenswrapper[4798]: I0203 00:30:48.468285 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:30:50 crc kubenswrapper[4798]: E0203 00:30:50.414680 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache]" Feb 03 00:30:50 crc kubenswrapper[4798]: I0203 00:30:50.435834 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:50 crc kubenswrapper[4798]: I0203 00:30:50.435880 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:50 crc kubenswrapper[4798]: I0203 00:30:50.477544 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:30:58 crc kubenswrapper[4798]: I0203 00:30:58.491857 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:30:58 crc kubenswrapper[4798]: I0203 00:30:58.574495 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk" event={"ID":"c46c738a-1240-42e9-9d41-adab078b0a95","Type":"ContainerStarted","Data":"0942e7a08ce6806e14cdbb919fa49d79ad92b39a0b0be7c0cc288e27edfaf9c2"} Feb 03 00:30:58 crc kubenswrapper[4798]: I0203 00:30:58.592380 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bbbc889bc-rjnfk" podStartSLOduration=1.580807392 podStartE2EDuration="42.592362278s" podCreationTimestamp="2026-02-03 00:30:16 +0000 UTC" firstStartedPulling="2026-02-03 00:30:17.342174852 +0000 UTC m=+909.108164863" lastFinishedPulling="2026-02-03 00:30:58.353729738 +0000 UTC m=+950.119719749" observedRunningTime="2026-02-03 
00:30:58.587885766 +0000 UTC m=+950.353875797" watchObservedRunningTime="2026-02-03 00:30:58.592362278 +0000 UTC m=+950.358352289" Feb 03 00:31:00 crc kubenswrapper[4798]: I0203 00:31:00.485136 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:31:00 crc kubenswrapper[4798]: E0203 00:31:00.555297 4798 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3093ff_b293_4d02_9db9_724830306c14.slice/crio-9bd22a49781d3520b7288ef76d31c766bb6e2e18617c762d877f580c4791243d\": RecentStats: unable to find data in memory cache]" Feb 03 00:31:00 crc kubenswrapper[4798]: I0203 00:31:00.680016 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4jb9"] Feb 03 00:31:00 crc kubenswrapper[4798]: I0203 00:31:00.680250 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v4jb9" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="registry-server" containerID="cri-o://05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55" gracePeriod=2 Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.028748 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.077788 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content\") pod \"e47a07f6-e4db-4607-b3b6-23c86aab8954\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.077956 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities\") pod \"e47a07f6-e4db-4607-b3b6-23c86aab8954\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.078028 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5r7j\" (UniqueName: \"kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j\") pod \"e47a07f6-e4db-4607-b3b6-23c86aab8954\" (UID: \"e47a07f6-e4db-4607-b3b6-23c86aab8954\") " Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.078876 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities" (OuterVolumeSpecName: "utilities") pod "e47a07f6-e4db-4607-b3b6-23c86aab8954" (UID: "e47a07f6-e4db-4607-b3b6-23c86aab8954"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.079255 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.086306 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j" (OuterVolumeSpecName: "kube-api-access-q5r7j") pod "e47a07f6-e4db-4607-b3b6-23c86aab8954" (UID: "e47a07f6-e4db-4607-b3b6-23c86aab8954"). InnerVolumeSpecName "kube-api-access-q5r7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.138566 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e47a07f6-e4db-4607-b3b6-23c86aab8954" (UID: "e47a07f6-e4db-4607-b3b6-23c86aab8954"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.180102 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5r7j\" (UniqueName: \"kubernetes.io/projected/e47a07f6-e4db-4607-b3b6-23c86aab8954-kube-api-access-q5r7j\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.180139 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47a07f6-e4db-4607-b3b6-23c86aab8954-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.604995 4798 generic.go:334] "Generic (PLEG): container finished" podID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerID="05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55" exitCode=0 Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.605042 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerDied","Data":"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55"} Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.605085 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4jb9" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.605106 4798 scope.go:117] "RemoveContainer" containerID="05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.605090 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4jb9" event={"ID":"e47a07f6-e4db-4607-b3b6-23c86aab8954","Type":"ContainerDied","Data":"137d46b46efbcbd5ce51fa83bf8cec69a80336a92992fc81ed1e02b47678d6ad"} Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.628008 4798 scope.go:117] "RemoveContainer" containerID="dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.631040 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4jb9"] Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.636287 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v4jb9"] Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.642622 4798 scope.go:117] "RemoveContainer" containerID="93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.660417 4798 scope.go:117] "RemoveContainer" containerID="05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55" Feb 03 00:31:01 crc kubenswrapper[4798]: E0203 00:31:01.660996 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55\": container with ID starting with 05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55 not found: ID does not exist" containerID="05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.661058 4798 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55"} err="failed to get container status \"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55\": rpc error: code = NotFound desc = could not find container \"05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55\": container with ID starting with 05386a15737315fefd7a21285bcac02d13e6f6ea87ed17ff6d5abef8fdb6bd55 not found: ID does not exist" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.661085 4798 scope.go:117] "RemoveContainer" containerID="dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c" Feb 03 00:31:01 crc kubenswrapper[4798]: E0203 00:31:01.661559 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c\": container with ID starting with dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c not found: ID does not exist" containerID="dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.661600 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c"} err="failed to get container status \"dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c\": rpc error: code = NotFound desc = could not find container \"dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c\": container with ID starting with dfaa838ac2a98d17a5bf06f7d298a09543e5e06d4cd5445da756798691db126c not found: ID does not exist" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.661627 4798 scope.go:117] "RemoveContainer" containerID="93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b" Feb 03 00:31:01 crc kubenswrapper[4798]: E0203 
00:31:01.661955 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b\": container with ID starting with 93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b not found: ID does not exist" containerID="93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b" Feb 03 00:31:01 crc kubenswrapper[4798]: I0203 00:31:01.661979 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b"} err="failed to get container status \"93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b\": rpc error: code = NotFound desc = could not find container \"93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b\": container with ID starting with 93046288fdb3b8fc7357476497ffbf5c804808bedf6712b64087e2525641b40b not found: ID does not exist" Feb 03 00:31:02 crc kubenswrapper[4798]: I0203 00:31:02.924737 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" path="/var/lib/kubelet/pods/e47a07f6-e4db-4607-b3b6-23c86aab8954/volumes" Feb 03 00:31:04 crc kubenswrapper[4798]: I0203 00:31:04.083199 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:31:04 crc kubenswrapper[4798]: I0203 00:31:04.083615 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crmld" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="registry-server" containerID="cri-o://88907ac3f2130b32ff221f8c3d15a6c93f67e77309557446a8554f9d9f14d887" gracePeriod=2 Feb 03 00:31:04 crc kubenswrapper[4798]: I0203 00:31:04.629097 4798 generic.go:334] "Generic (PLEG): container finished" podID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" 
containerID="88907ac3f2130b32ff221f8c3d15a6c93f67e77309557446a8554f9d9f14d887" exitCode=0 Feb 03 00:31:04 crc kubenswrapper[4798]: I0203 00:31:04.629136 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerDied","Data":"88907ac3f2130b32ff221f8c3d15a6c93f67e77309557446a8554f9d9f14d887"} Feb 03 00:31:04 crc kubenswrapper[4798]: I0203 00:31:04.950009 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.032504 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities\") pod \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.032624 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content\") pod \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.032773 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4762s\" (UniqueName: \"kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s\") pod \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\" (UID: \"e40ff644-9cdc-4092-bb0e-bbf12f8675fd\") " Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.035215 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities" (OuterVolumeSpecName: "utilities") pod "e40ff644-9cdc-4092-bb0e-bbf12f8675fd" (UID: 
"e40ff644-9cdc-4092-bb0e-bbf12f8675fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.041209 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s" (OuterVolumeSpecName: "kube-api-access-4762s") pod "e40ff644-9cdc-4092-bb0e-bbf12f8675fd" (UID: "e40ff644-9cdc-4092-bb0e-bbf12f8675fd"). InnerVolumeSpecName "kube-api-access-4762s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.099394 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e40ff644-9cdc-4092-bb0e-bbf12f8675fd" (UID: "e40ff644-9cdc-4092-bb0e-bbf12f8675fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.134773 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4762s\" (UniqueName: \"kubernetes.io/projected/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-kube-api-access-4762s\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.134828 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.134847 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40ff644-9cdc-4092-bb0e-bbf12f8675fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.638991 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-crmld" event={"ID":"e40ff644-9cdc-4092-bb0e-bbf12f8675fd","Type":"ContainerDied","Data":"dbe2c77a512d8db128d99af9d5113224030540962b2e50bd84c6c1a9279b2430"} Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.639043 4798 scope.go:117] "RemoveContainer" containerID="88907ac3f2130b32ff221f8c3d15a6c93f67e77309557446a8554f9d9f14d887" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.639173 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crmld" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.661889 4798 scope.go:117] "RemoveContainer" containerID="f839c901e20462b1c65bb48ae4c4c74d361e7279baf2c6b69c0a0bda5c15c6ad" Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.669214 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.675776 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crmld"] Feb 03 00:31:05 crc kubenswrapper[4798]: I0203 00:31:05.699785 4798 scope.go:117] "RemoveContainer" containerID="ddf814742a7921ac2b89a2536eb91a3e910d117775bf4ba6e6828d09e2b37f4c" Feb 03 00:31:06 crc kubenswrapper[4798]: I0203 00:31:06.916432 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" path="/var/lib/kubelet/pods/e40ff644-9cdc-4092-bb0e-bbf12f8675fd/volumes" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.375712 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"] Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376526 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376543 4798 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376554 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376562 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376577 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376584 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376599 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376606 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376619 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376627 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376636 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376644 4798 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376673 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376681 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="extract-content" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376689 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376696 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="extract-utilities" Feb 03 00:31:11 crc kubenswrapper[4798]: E0203 00:31:11.376712 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376720 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376836 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="21efe8c6-cfa4-4cd5-bf5b-8d250a17b55f" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376860 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40ff644-9cdc-4092-bb0e-bbf12f8675fd" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.376870 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47a07f6-e4db-4607-b3b6-23c86aab8954" containerName="registry-server" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.377375 4798 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.380164 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.381622 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.384077 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.384462 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.384808 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.386959 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.387189 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-rs2wt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.405435 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"] Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411216 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca\") pod 
\"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411294 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411336 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411370 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411394 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " 
pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411426 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tkb\" (UniqueName: \"kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.411452 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512414 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512533 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512586 4798 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512625 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512718 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512762 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tkb\" (UniqueName: \"kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.512787 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " 
pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.519323 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.519380 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.519380 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.519618 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.530447 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.530533 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.541743 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tkb\" (UniqueName: \"kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb\") pod \"default-interconnect-68864d46cb-gm9dt\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:11 crc kubenswrapper[4798]: I0203 00:31:11.701294 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:31:12 crc kubenswrapper[4798]: I0203 00:31:12.119015 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"] Feb 03 00:31:12 crc kubenswrapper[4798]: I0203 00:31:12.704970 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" event={"ID":"636c3318-c8fd-44c9-8501-544006efaddc","Type":"ContainerStarted","Data":"109234d20bb46cd4bc9abb50f5fc1afb0e56ca741d4c63f788bd7dad884d28d7"} Feb 03 00:31:17 crc kubenswrapper[4798]: I0203 00:31:17.769397 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" event={"ID":"636c3318-c8fd-44c9-8501-544006efaddc","Type":"ContainerStarted","Data":"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"} Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.752045 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" podStartSLOduration=5.525317018 podStartE2EDuration="10.752024049s" podCreationTimestamp="2026-02-03 00:31:11 +0000 UTC" firstStartedPulling="2026-02-03 00:31:12.123292336 +0000 UTC m=+963.889282347" lastFinishedPulling="2026-02-03 00:31:17.349999367 +0000 UTC m=+969.115989378" observedRunningTime="2026-02-03 00:31:17.794617212 +0000 UTC m=+969.560607233" watchObservedRunningTime="2026-02-03 00:31:21.752024049 +0000 UTC m=+973.518014060" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.754530 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.755934 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.757547 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.758166 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759402 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759407 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759430 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759558 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-qdqj2" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759571 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759787 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.759803 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.760147 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768763 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768814 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768845 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768871 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-web-config\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768895 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " 
pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768914 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768946 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.768969 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbslv\" (UniqueName: \"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-kube-api-access-hbslv\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.769045 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config-out\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.769103 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config\") pod 
\"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.769144 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.769176 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.781873 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.870458 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871017 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871129 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871216 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-web-config\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871295 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871375 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: E0203 00:31:21.871312 4798 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871445 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbslv\" (UniqueName: 
\"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-kube-api-access-hbslv\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: E0203 00:31:21.871562 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls podName:fb8978b7-2e43-4d75-a710-00e122a6f9a7 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:22.371507302 +0000 UTC m=+974.137497313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fb8978b7-2e43-4d75-a710-00e122a6f9a7") : secret "default-prometheus-proxy-tls" not found Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871603 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.871701 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config-out\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.872104 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.872241 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.872431 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.876204 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.876275 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.876408 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.877362 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb8978b7-2e43-4d75-a710-00e122a6f9a7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.880228 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.880354 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.880383 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.880440 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-web-config\") pod \"prometheus-default-0\" (UID: 
\"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.881902 4798 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.881937 4798 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/855d004eaf262aaa780a0fdcb8d601c13b5942b32c357504c43c52155538d4df/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.894926 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb8978b7-2e43-4d75-a710-00e122a6f9a7-config-out\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.895281 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbslv\" (UniqueName: \"kubernetes.io/projected/fb8978b7-2e43-4d75-a710-00e122a6f9a7-kube-api-access-hbslv\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:21 crc kubenswrapper[4798]: I0203 00:31:21.910866 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb33f6b7-2ee2-48a4-88c2-38d6c1ea1d6c\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " 
pod="service-telemetry/prometheus-default-0" Feb 03 00:31:22 crc kubenswrapper[4798]: I0203 00:31:22.382951 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:22 crc kubenswrapper[4798]: E0203 00:31:22.383149 4798 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 03 00:31:22 crc kubenswrapper[4798]: E0203 00:31:22.383224 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls podName:fb8978b7-2e43-4d75-a710-00e122a6f9a7 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:23.383206141 +0000 UTC m=+975.149196152 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fb8978b7-2e43-4d75-a710-00e122a6f9a7") : secret "default-prometheus-proxy-tls" not found Feb 03 00:31:23 crc kubenswrapper[4798]: I0203 00:31:23.396314 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:23 crc kubenswrapper[4798]: I0203 00:31:23.408255 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8978b7-2e43-4d75-a710-00e122a6f9a7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb8978b7-2e43-4d75-a710-00e122a6f9a7\") " pod="service-telemetry/prometheus-default-0" Feb 03 00:31:23 crc kubenswrapper[4798]: I0203 00:31:23.573387 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 03 00:31:23 crc kubenswrapper[4798]: I0203 00:31:23.981547 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 03 00:31:24 crc kubenswrapper[4798]: I0203 00:31:24.816464 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerStarted","Data":"4cd3a58e50ffaf588505e0fe21df85a4f26c98ef2dbf86802fd7fca8db1cdff9"} Feb 03 00:31:28 crc kubenswrapper[4798]: I0203 00:31:28.845830 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerStarted","Data":"fa1444afb46f6a0467870d49dfc360807fe02440a03609209a400d07e5524473"} Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.577039 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f"] Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.578247 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.585108 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f"] Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.712099 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbxh\" (UniqueName: \"kubernetes.io/projected/9513eb34-b9eb-4ea3-b49f-4c468e08a5d5-kube-api-access-gjbxh\") pod \"default-snmp-webhook-78bcbbdcff-wwz4f\" (UID: \"9513eb34-b9eb-4ea3-b49f-4c468e08a5d5\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.814259 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjbxh\" (UniqueName: \"kubernetes.io/projected/9513eb34-b9eb-4ea3-b49f-4c468e08a5d5-kube-api-access-gjbxh\") pod \"default-snmp-webhook-78bcbbdcff-wwz4f\" (UID: \"9513eb34-b9eb-4ea3-b49f-4c468e08a5d5\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.845396 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjbxh\" (UniqueName: \"kubernetes.io/projected/9513eb34-b9eb-4ea3-b49f-4c468e08a5d5-kube-api-access-gjbxh\") pod \"default-snmp-webhook-78bcbbdcff-wwz4f\" (UID: \"9513eb34-b9eb-4ea3-b49f-4c468e08a5d5\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" Feb 03 00:31:31 crc kubenswrapper[4798]: I0203 00:31:31.893739 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" Feb 03 00:31:32 crc kubenswrapper[4798]: I0203 00:31:32.298521 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f"] Feb 03 00:31:32 crc kubenswrapper[4798]: W0203 00:31:32.302574 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9513eb34_b9eb_4ea3_b49f_4c468e08a5d5.slice/crio-961d2ebd395fb79fea7f8bd3cdb6c809e4509da894493bb5525304bcdbe3b293 WatchSource:0}: Error finding container 961d2ebd395fb79fea7f8bd3cdb6c809e4509da894493bb5525304bcdbe3b293: Status 404 returned error can't find the container with id 961d2ebd395fb79fea7f8bd3cdb6c809e4509da894493bb5525304bcdbe3b293 Feb 03 00:31:32 crc kubenswrapper[4798]: I0203 00:31:32.873838 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" event={"ID":"9513eb34-b9eb-4ea3-b49f-4c468e08a5d5","Type":"ContainerStarted","Data":"961d2ebd395fb79fea7f8bd3cdb6c809e4509da894493bb5525304bcdbe3b293"} Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.117923 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.121177 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.124685 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.124874 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.124979 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.125131 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.125366 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-596kt" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.125503 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.130433 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264001 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-config-volume\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264134 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-tls-assets\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264216 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2067d204-5ad9-43a2-9233-ee24c671e516-config-out\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264255 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264294 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-web-config\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264361 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264424 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264445 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5sx\" (UniqueName: \"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-kube-api-access-pp5sx\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.264502 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366043 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-web-config\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366094 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 
00:31:35.366125 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366148 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5sx\" (UniqueName: \"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-kube-api-access-pp5sx\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366175 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366197 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-config-volume\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366215 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-tls-assets\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 
00:31:35.366242 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2067d204-5ad9-43a2-9233-ee24c671e516-config-out\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.366268 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: E0203 00:31:35.368384 4798 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:35 crc kubenswrapper[4798]: E0203 00:31:35.368474 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls podName:2067d204-5ad9-43a2-9233-ee24c671e516 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:35.868453546 +0000 UTC m=+987.634443557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "2067d204-5ad9-43a2-9233-ee24c671e516") : secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.372056 4798 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.372194 4798 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/32049845e36df77c1340e207dcd0e931f80c1f916c3f38d3643cc6cc0f3d5117/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.374129 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-tls-assets\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.374537 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.375380 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.376421 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-web-config\") pod 
\"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.385118 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-config-volume\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.390563 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2067d204-5ad9-43a2-9233-ee24c671e516-config-out\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.392297 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5sx\" (UniqueName: \"kubernetes.io/projected/2067d204-5ad9-43a2-9233-ee24c671e516-kube-api-access-pp5sx\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.399071 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdc315eb-3a8e-4529-9422-d35df819fdbf\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.875259 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:35 crc kubenswrapper[4798]: E0203 00:31:35.875406 4798 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:35 crc kubenswrapper[4798]: E0203 00:31:35.875484 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls podName:2067d204-5ad9-43a2-9233-ee24c671e516 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:36.87546297 +0000 UTC m=+988.641452991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "2067d204-5ad9-43a2-9233-ee24c671e516") : secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.895349 4798 generic.go:334] "Generic (PLEG): container finished" podID="fb8978b7-2e43-4d75-a710-00e122a6f9a7" containerID="fa1444afb46f6a0467870d49dfc360807fe02440a03609209a400d07e5524473" exitCode=0 Feb 03 00:31:35 crc kubenswrapper[4798]: I0203 00:31:35.895424 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerDied","Data":"fa1444afb46f6a0467870d49dfc360807fe02440a03609209a400d07e5524473"} Feb 03 00:31:36 crc kubenswrapper[4798]: I0203 00:31:36.889672 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " 
pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:36 crc kubenswrapper[4798]: E0203 00:31:36.889917 4798 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:36 crc kubenswrapper[4798]: E0203 00:31:36.890015 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls podName:2067d204-5ad9-43a2-9233-ee24c671e516 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:38.889995745 +0000 UTC m=+990.655985756 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "2067d204-5ad9-43a2-9233-ee24c671e516") : secret "default-alertmanager-proxy-tls" not found Feb 03 00:31:38 crc kubenswrapper[4798]: I0203 00:31:38.947972 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:38 crc kubenswrapper[4798]: I0203 00:31:38.953553 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2067d204-5ad9-43a2-9233-ee24c671e516-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"2067d204-5ad9-43a2-9233-ee24c671e516\") " pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:39 crc kubenswrapper[4798]: I0203 00:31:39.047825 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.530635 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.866710 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.866780 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.971174 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" event={"ID":"9513eb34-b9eb-4ea3-b49f-4c468e08a5d5","Type":"ContainerStarted","Data":"7a6b002c784dc84261d1ea92ea63076f2c9bedb2e2d7b5fea99748ac239861c0"} Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.976146 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerStarted","Data":"0f5ef0fec3e55ebc0bfbc04768ce1a4df46b73cf20c02a93900ca6edbbb61125"} Feb 03 00:31:43 crc kubenswrapper[4798]: I0203 00:31:43.989928 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-wwz4f" podStartSLOduration=1.7164630920000001 podStartE2EDuration="12.98988727s" podCreationTimestamp="2026-02-03 00:31:31 +0000 UTC" firstStartedPulling="2026-02-03 
00:31:32.304600305 +0000 UTC m=+984.070590316" lastFinishedPulling="2026-02-03 00:31:43.578024473 +0000 UTC m=+995.344014494" observedRunningTime="2026-02-03 00:31:43.985999889 +0000 UTC m=+995.751989910" watchObservedRunningTime="2026-02-03 00:31:43.98988727 +0000 UTC m=+995.755877281" Feb 03 00:31:46 crc kubenswrapper[4798]: I0203 00:31:46.001927 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerStarted","Data":"c3f6ad26f30eea33a36a182f754d84d877320a3e26b9ffe01c35cd0a9020e2d6"} Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.040310 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerStarted","Data":"d6dfe0776d8183b0267f3396acf5a8c6c23288f0aff83a6fb7e16364475f5418"} Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.773209 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh"] Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.774450 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.777489 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.777582 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.777715 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-8x42l" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.778824 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.788251 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh"] Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.890541 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71313990-6f87-41d6-ae1c-d42b159dbb8c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.890762 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 
00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.890967 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71313990-6f87-41d6-ae1c-d42b159dbb8c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.891103 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.891171 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kzs\" (UniqueName: \"kubernetes.io/projected/71313990-6f87-41d6-ae1c-d42b159dbb8c-kube-api-access-c4kzs\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.992284 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.992358 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71313990-6f87-41d6-ae1c-d42b159dbb8c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.992397 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.992428 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kzs\" (UniqueName: \"kubernetes.io/projected/71313990-6f87-41d6-ae1c-d42b159dbb8c-kube-api-access-c4kzs\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: E0203 00:31:49.992624 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 03 00:31:49 crc kubenswrapper[4798]: E0203 00:31:49.992762 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls podName:71313990-6f87-41d6-ae1c-d42b159dbb8c nodeName:}" failed. No retries permitted until 2026-02-03 00:31:50.492738514 +0000 UTC m=+1002.258728545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" (UID: "71313990-6f87-41d6-ae1c-d42b159dbb8c") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.992886 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71313990-6f87-41d6-ae1c-d42b159dbb8c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.993083 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71313990-6f87-41d6-ae1c-d42b159dbb8c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.994283 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71313990-6f87-41d6-ae1c-d42b159dbb8c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:49 crc kubenswrapper[4798]: I0203 00:31:49.997704 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" 
(UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:50 crc kubenswrapper[4798]: I0203 00:31:50.015745 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kzs\" (UniqueName: \"kubernetes.io/projected/71313990-6f87-41d6-ae1c-d42b159dbb8c-kube-api-access-c4kzs\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:50 crc kubenswrapper[4798]: I0203 00:31:50.501334 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:50 crc kubenswrapper[4798]: E0203 00:31:50.501557 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 03 00:31:50 crc kubenswrapper[4798]: E0203 00:31:50.501674 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls podName:71313990-6f87-41d6-ae1c-d42b159dbb8c nodeName:}" failed. No retries permitted until 2026-02-03 00:31:51.501633972 +0000 UTC m=+1003.267623993 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" (UID: "71313990-6f87-41d6-ae1c-d42b159dbb8c") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.056250 4798 generic.go:334] "Generic (PLEG): container finished" podID="2067d204-5ad9-43a2-9233-ee24c671e516" containerID="c3f6ad26f30eea33a36a182f754d84d877320a3e26b9ffe01c35cd0a9020e2d6" exitCode=0 Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.056325 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerDied","Data":"c3f6ad26f30eea33a36a182f754d84d877320a3e26b9ffe01c35cd0a9020e2d6"} Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.059162 4798 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.060453 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerStarted","Data":"6dedee9c56c41a188a1b22bf2811fba0954be638405cbd9ab81e8758af672118"} Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.522487 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.540095 4798 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71313990-6f87-41d6-ae1c-d42b159dbb8c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh\" (UID: \"71313990-6f87-41d6-ae1c-d42b159dbb8c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.588540 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.844391 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt"] Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.845627 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.851771 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.851834 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.854141 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt"] Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.879427 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh"] Feb 03 00:31:51 crc kubenswrapper[4798]: W0203 00:31:51.883718 4798 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71313990_6f87_41d6_ae1c_d42b159dbb8c.slice/crio-87adcdb5ed685d9da8a817988f0e062d92a857d3b48e5377775268eacde3a5e5 WatchSource:0}: Error finding container 87adcdb5ed685d9da8a817988f0e062d92a857d3b48e5377775268eacde3a5e5: Status 404 returned error can't find the container with id 87adcdb5ed685d9da8a817988f0e062d92a857d3b48e5377775268eacde3a5e5 Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.928969 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.929185 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.929202 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpp7w\" (UniqueName: \"kubernetes.io/projected/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-kube-api-access-bpp7w\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.929253 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:51 crc kubenswrapper[4798]: I0203 00:31:51.929349 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036142 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036200 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036231 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpp7w\" (UniqueName: \"kubernetes.io/projected/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-kube-api-access-bpp7w\") pod 
\"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036336 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036394 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: E0203 00:31:52.036513 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 03 00:31:52 crc kubenswrapper[4798]: E0203 00:31:52.036575 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls podName:16e931c1-5d38-4fe7-8827-ea6cd99f3fb9 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:52.536552624 +0000 UTC m=+1004.302542635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" (UID: "16e931c1-5d38-4fe7-8827-ea6cd99f3fb9") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.036907 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.037273 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.057299 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpp7w\" (UniqueName: \"kubernetes.io/projected/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-kube-api-access-bpp7w\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.057842 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: 
\"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.069649 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"87adcdb5ed685d9da8a817988f0e062d92a857d3b48e5377775268eacde3a5e5"} Feb 03 00:31:52 crc kubenswrapper[4798]: I0203 00:31:52.544523 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:52 crc kubenswrapper[4798]: E0203 00:31:52.544706 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 03 00:31:52 crc kubenswrapper[4798]: E0203 00:31:52.544772 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls podName:16e931c1-5d38-4fe7-8827-ea6cd99f3fb9 nodeName:}" failed. No retries permitted until 2026-02-03 00:31:53.544753013 +0000 UTC m=+1005.310743014 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" (UID: "16e931c1-5d38-4fe7-8827-ea6cd99f3fb9") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 03 00:31:53 crc kubenswrapper[4798]: I0203 00:31:53.563272 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:53 crc kubenswrapper[4798]: I0203 00:31:53.567433 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e931c1-5d38-4fe7-8827-ea6cd99f3fb9-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt\" (UID: \"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:53 crc kubenswrapper[4798]: I0203 00:31:53.670966 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" Feb 03 00:31:55 crc kubenswrapper[4798]: I0203 00:31:55.988996 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7"] Feb 03 00:31:55 crc kubenswrapper[4798]: I0203 00:31:55.990469 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:55 crc kubenswrapper[4798]: I0203 00:31:55.994411 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.012874 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.015873 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7"] Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.107137 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rgv\" (UniqueName: \"kubernetes.io/projected/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-kube-api-access-m2rgv\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.107210 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.107244 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: 
\"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.108676 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.108746 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.210510 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.210569 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.210666 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rgv\" (UniqueName: \"kubernetes.io/projected/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-kube-api-access-m2rgv\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.210711 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: E0203 00:31:56.210727 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 03 00:31:56 crc kubenswrapper[4798]: E0203 00:31:56.210822 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls podName:9cbb39d9-224e-4ecb-b734-ca8c8652e01d nodeName:}" failed. No retries permitted until 2026-02-03 00:31:56.710801677 +0000 UTC m=+1008.476791688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" (UID: "9cbb39d9-224e-4ecb-b734-ca8c8652e01d") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.210742 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.211312 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.211709 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.217198 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" 
(UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.228564 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rgv\" (UniqueName: \"kubernetes.io/projected/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-kube-api-access-m2rgv\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: I0203 00:31:56.718968 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:56 crc kubenswrapper[4798]: E0203 00:31:56.719193 4798 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 03 00:31:56 crc kubenswrapper[4798]: E0203 00:31:56.719281 4798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls podName:9cbb39d9-224e-4ecb-b734-ca8c8652e01d nodeName:}" failed. No retries permitted until 2026-02-03 00:31:57.719261754 +0000 UTC m=+1009.485251765 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" (UID: "9cbb39d9-224e-4ecb-b734-ca8c8652e01d") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 03 00:31:57 crc kubenswrapper[4798]: I0203 00:31:57.734246 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:57 crc kubenswrapper[4798]: I0203 00:31:57.740469 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cbb39d9-224e-4ecb-b734-ca8c8652e01d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7\" (UID: \"9cbb39d9-224e-4ecb-b734-ca8c8652e01d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:57 crc kubenswrapper[4798]: I0203 00:31:57.842725 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" Feb 03 00:31:59 crc kubenswrapper[4798]: I0203 00:31:59.323890 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt"] Feb 03 00:31:59 crc kubenswrapper[4798]: I0203 00:31:59.403354 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7"] Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.173708 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb8978b7-2e43-4d75-a710-00e122a6f9a7","Type":"ContainerStarted","Data":"55ac51ca53565c68924319c6689cb3c3004ddefa27028e1c9c967c27b9a80007"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.178506 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"8d2ee1de143700f6aab913959044d7b10f292428c9c5a4e7b8feeb3f6e82b8ba"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.178544 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"3a426c775f34c886825539ac5813da61ce9f9efa61d9f7fb71693f5c04284e26"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.179523 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"26d6cc47268665ef7ddb5f4fc648af60447f76333ebf300092fabd382d150933"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.180962 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerStarted","Data":"ee2e19c295fcea03cc97dfe38a222a3565d26d9aaa7657387163ede6133e67d8"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.182676 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"3931112d63af55119784ec121099f71d92beed5d69e7d4448f4fd06e92ad408b"} Feb 03 00:32:00 crc kubenswrapper[4798]: I0203 00:32:00.206625 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.913475947 podStartE2EDuration="40.206607183s" podCreationTimestamp="2026-02-03 00:31:20 +0000 UTC" firstStartedPulling="2026-02-03 00:31:23.993419183 +0000 UTC m=+975.759409204" lastFinishedPulling="2026-02-03 00:31:59.286550429 +0000 UTC m=+1011.052540440" observedRunningTime="2026-02-03 00:32:00.194111796 +0000 UTC m=+1011.960101807" watchObservedRunningTime="2026-02-03 00:32:00.206607183 +0000 UTC m=+1011.972597194" Feb 03 00:32:01 crc kubenswrapper[4798]: I0203 00:32:01.190190 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"a788ded52f6121ca90cefb259e2a99da244c84aef1e580bde053e916f2018114"} Feb 03 00:32:02 crc kubenswrapper[4798]: I0203 00:32:02.198872 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerStarted","Data":"de357be45f3a4bb5b395ba8f06ba39da5838b0ac2db29f401e66da24fca36f9d"} Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.573923 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="service-telemetry/prometheus-default-0" Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.927124 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn"] Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.928370 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.934153 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.934351 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Feb 03 00:32:03 crc kubenswrapper[4798]: I0203 00:32:03.939267 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn"] Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.052188 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c88e7ac1-9440-4e1c-9140-09246e2588ce-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.052599 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbv82\" (UniqueName: \"kubernetes.io/projected/c88e7ac1-9440-4e1c-9140-09246e2588ce-kube-api-access-qbv82\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 
00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.052636 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c88e7ac1-9440-4e1c-9140-09246e2588ce-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.052712 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c88e7ac1-9440-4e1c-9140-09246e2588ce-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.154027 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbv82\" (UniqueName: \"kubernetes.io/projected/c88e7ac1-9440-4e1c-9140-09246e2588ce-kube-api-access-qbv82\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.154078 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c88e7ac1-9440-4e1c-9140-09246e2588ce-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.154133 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/c88e7ac1-9440-4e1c-9140-09246e2588ce-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.154182 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c88e7ac1-9440-4e1c-9140-09246e2588ce-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.156280 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c88e7ac1-9440-4e1c-9140-09246e2588ce-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.157237 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c88e7ac1-9440-4e1c-9140-09246e2588ce-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.172020 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c88e7ac1-9440-4e1c-9140-09246e2588ce-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.174083 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbv82\" (UniqueName: \"kubernetes.io/projected/c88e7ac1-9440-4e1c-9140-09246e2588ce-kube-api-access-qbv82\") pod \"default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn\" (UID: \"c88e7ac1-9440-4e1c-9140-09246e2588ce\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:04 crc kubenswrapper[4798]: I0203 00:32:04.265422 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.217583 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh"] Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.218637 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.226179 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.238421 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh"] Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.389142 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjvst\" (UniqueName: \"kubernetes.io/projected/05cade84-cb26-4953-984f-4c9c376378b1-kube-api-access-mjvst\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.389217 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/05cade84-cb26-4953-984f-4c9c376378b1-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.389243 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/05cade84-cb26-4953-984f-4c9c376378b1-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.389417 4798 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/05cade84-cb26-4953-984f-4c9c376378b1-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.491460 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/05cade84-cb26-4953-984f-4c9c376378b1-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.491525 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/05cade84-cb26-4953-984f-4c9c376378b1-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.491562 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/05cade84-cb26-4953-984f-4c9c376378b1-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.491669 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjvst\" (UniqueName: 
\"kubernetes.io/projected/05cade84-cb26-4953-984f-4c9c376378b1-kube-api-access-mjvst\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.492183 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/05cade84-cb26-4953-984f-4c9c376378b1-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.493262 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/05cade84-cb26-4953-984f-4c9c376378b1-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.497497 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/05cade84-cb26-4953-984f-4c9c376378b1-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.523531 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjvst\" (UniqueName: \"kubernetes.io/projected/05cade84-cb26-4953-984f-4c9c376378b1-kube-api-access-mjvst\") pod \"default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh\" (UID: \"05cade84-cb26-4953-984f-4c9c376378b1\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:05 crc kubenswrapper[4798]: I0203 00:32:05.545882 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" Feb 03 00:32:08 crc kubenswrapper[4798]: I0203 00:32:08.573569 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 03 00:32:08 crc kubenswrapper[4798]: I0203 00:32:08.621698 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.248327 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn"] Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.280697 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"2067d204-5ad9-43a2-9233-ee24c671e516","Type":"ContainerStarted","Data":"0ba079dba9d8d7f6de79826316d2ec76b8a52eb7dc7207f02bd8e5ae9d9a5e2a"} Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.282917 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerStarted","Data":"8c6dad08f5fee0a10ec97ddda151f9091dfa46bbb4e7626cbd9feb44d56fb302"} Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.312991 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=23.660626208 podStartE2EDuration="35.312947702s" podCreationTimestamp="2026-02-03 00:31:34 +0000 UTC" firstStartedPulling="2026-02-03 00:31:51.05881791 +0000 UTC m=+1002.824807921" lastFinishedPulling="2026-02-03 00:32:02.711139404 +0000 UTC m=+1014.477129415" 
observedRunningTime="2026-02-03 00:32:09.308380071 +0000 UTC m=+1021.074370082" watchObservedRunningTime="2026-02-03 00:32:09.312947702 +0000 UTC m=+1021.078937723" Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.340134 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 03 00:32:09 crc kubenswrapper[4798]: I0203 00:32:09.430942 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh"] Feb 03 00:32:10 crc kubenswrapper[4798]: I0203 00:32:10.293287 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"9a3f087195fe4ec65f8e4afe36589e91ab61037a0b401e1f5db6c559c949835a"} Feb 03 00:32:10 crc kubenswrapper[4798]: I0203 00:32:10.297612 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"49a7f044216ea0a26a2cb9d5e954bc932911fa2b38496585aecd144ac9774302"} Feb 03 00:32:10 crc kubenswrapper[4798]: I0203 00:32:10.302146 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"dc69a0b6e5da00b28b2d7c70af6f0d118dd442669c4e52349ac1b2b39472b81a"} Feb 03 00:32:10 crc kubenswrapper[4798]: I0203 00:32:10.306623 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerStarted","Data":"29cd18bdea9d3a6e3478968795c7046538f4dd97a9336de2dc505fe9f92bc5fa"} Feb 03 00:32:10 crc kubenswrapper[4798]: I0203 
00:32:10.310436 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerStarted","Data":"b06eb0294981369cd8530418f47be0fda0c55dcf45fae2d83af7b9189bd2ac01"} Feb 03 00:32:11 crc kubenswrapper[4798]: I0203 00:32:11.328680 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerStarted","Data":"81aa4df94554bf295f9a7c71dde80483f09454a9cc9565ab270228586097cfee"} Feb 03 00:32:13 crc kubenswrapper[4798]: I0203 00:32:13.867110 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:32:13 crc kubenswrapper[4798]: I0203 00:32:13.867503 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:32:26 crc kubenswrapper[4798]: I0203 00:32:26.934737 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"] Feb 03 00:32:26 crc kubenswrapper[4798]: I0203 00:32:26.935437 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" podUID="636c3318-c8fd-44c9-8501-544006efaddc" containerName="default-interconnect" containerID="cri-o://831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf" gracePeriod=30 Feb 03 00:32:27 crc 
kubenswrapper[4798]: E0203 00:32:27.862052 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest" Feb 03 00:32:27 crc kubenswrapper[4798]: E0203 00:32:27.862584 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config /etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:prom-https,HostPort:0,ContainerPort:8083,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpp7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt_service-telemetry(16e931c1-5d38-4fe7-8827-ea6cd99f3fb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:32:27 crc kubenswrapper[4798]: E0203 00:32:27.863861 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podUID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.025723 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.025943 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config 
/etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:prom-https,HostPort:0,ContainerPort:8083,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4kzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh_service-telemetry(71313990-6f87-41d6-ae1c-d42b159dbb8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.027123 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podUID="71313990-6f87-41d6-ae1c-d42b159dbb8c" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.075580 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.075797 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config /etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:prom-https,HostPort:0,ContainerPort:8083,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2rgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7_service-telemetry(9cbb39d9-224e-4ecb-b734-ca8c8652e01d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.077005 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podUID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.216638 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.232398 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.232573 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config 
/etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-certs,ReadOnly:false,MountPath:/config/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbv82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn_service-telemetry(c88e7ac1-9440-4e1c-9140-09246e2588ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.234324 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podUID="c88e7ac1-9440-4e1c-9140-09246e2588ce" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.255489 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-f6v48"] Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.255733 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636c3318-c8fd-44c9-8501-544006efaddc" containerName="default-interconnect" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.255755 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="636c3318-c8fd-44c9-8501-544006efaddc" containerName="default-interconnect" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.255877 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="636c3318-c8fd-44c9-8501-544006efaddc" containerName="default-interconnect" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.256286 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.274941 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-f6v48"] Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291093 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291151 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291180 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291202 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-users\") pod 
\"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291217 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291250 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qg9\" (UniqueName: \"kubernetes.io/projected/3a197696-3010-4eb8-8a9a-9cdf609ea136-kube-api-access-q9qg9\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.291286 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-config\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.391963 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392015 4798 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392032 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392114 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392149 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392213 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392250 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tkb\" 
(UniqueName: \"kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb\") pod \"636c3318-c8fd-44c9-8501-544006efaddc\" (UID: \"636c3318-c8fd-44c9-8501-544006efaddc\") " Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392494 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392538 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-users\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392562 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392607 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qg9\" (UniqueName: \"kubernetes.io/projected/3a197696-3010-4eb8-8a9a-9cdf609ea136-kube-api-access-q9qg9\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 
00:32:28.392679 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-config\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392716 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.392761 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48" Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.396069 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.397517 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-config\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.399614 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.399800 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.400115 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.400222 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.401468 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-sasl-users\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.402241 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.402340 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.403594 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.405833 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.406061 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb" (OuterVolumeSpecName: "kube-api-access-t7tkb") pod "636c3318-c8fd-44c9-8501-544006efaddc" (UID: "636c3318-c8fd-44c9-8501-544006efaddc"). InnerVolumeSpecName "kube-api-access-t7tkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.411899 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3a197696-3010-4eb8-8a9a-9cdf609ea136-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.416099 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qg9\" (UniqueName: \"kubernetes.io/projected/3a197696-3010-4eb8-8a9a-9cdf609ea136-kube-api-access-q9qg9\") pod \"default-interconnect-68864d46cb-f6v48\" (UID: \"3a197696-3010-4eb8-8a9a-9cdf609ea136\") " pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.451339 4798 generic.go:334] "Generic (PLEG): container finished" podID="05cade84-cb26-4953-984f-4c9c376378b1" containerID="81aa4df94554bf295f9a7c71dde80483f09454a9cc9565ab270228586097cfee" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.451629 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerDied","Data":"81aa4df94554bf295f9a7c71dde80483f09454a9cc9565ab270228586097cfee"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.451739 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerStarted","Data":"b36fa27707044b1e30da2e42545511eea8b9308e21251e0e1fc63fb84d190fe1"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.451899 4798 scope.go:117] "RemoveContainer" containerID="81aa4df94554bf295f9a7c71dde80483f09454a9cc9565ab270228586097cfee"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.454051 4798 generic.go:334] "Generic (PLEG): container finished" podID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d" containerID="9a3f087195fe4ec65f8e4afe36589e91ab61037a0b401e1f5db6c559c949835a" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.454085 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerDied","Data":"9a3f087195fe4ec65f8e4afe36589e91ab61037a0b401e1f5db6c559c949835a"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.454963 4798 scope.go:117] "RemoveContainer" containerID="9a3f087195fe4ec65f8e4afe36589e91ab61037a0b401e1f5db6c559c949835a"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.457588 4798 generic.go:334] "Generic (PLEG): container finished" podID="636c3318-c8fd-44c9-8501-544006efaddc" containerID="831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.457637 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" event={"ID":"636c3318-c8fd-44c9-8501-544006efaddc","Type":"ContainerDied","Data":"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.457673 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt" event={"ID":"636c3318-c8fd-44c9-8501-544006efaddc","Type":"ContainerDied","Data":"109234d20bb46cd4bc9abb50f5fc1afb0e56ca741d4c63f788bd7dad884d28d7"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.457691 4798 scope.go:117] "RemoveContainer" containerID="831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.457789 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gm9dt"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.461508 4798 generic.go:334] "Generic (PLEG): container finished" podID="71313990-6f87-41d6-ae1c-d42b159dbb8c" containerID="49a7f044216ea0a26a2cb9d5e954bc932911fa2b38496585aecd144ac9774302" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.461590 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerDied","Data":"49a7f044216ea0a26a2cb9d5e954bc932911fa2b38496585aecd144ac9774302"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.462021 4798 scope.go:117] "RemoveContainer" containerID="49a7f044216ea0a26a2cb9d5e954bc932911fa2b38496585aecd144ac9774302"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.468566 4798 generic.go:334] "Generic (PLEG): container finished" podID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9" containerID="dc69a0b6e5da00b28b2d7c70af6f0d118dd442669c4e52349ac1b2b39472b81a" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.468690 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerDied","Data":"dc69a0b6e5da00b28b2d7c70af6f0d118dd442669c4e52349ac1b2b39472b81a"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.469339 4798 scope.go:117] "RemoveContainer" containerID="dc69a0b6e5da00b28b2d7c70af6f0d118dd442669c4e52349ac1b2b39472b81a"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.477077 4798 generic.go:334] "Generic (PLEG): container finished" podID="c88e7ac1-9440-4e1c-9140-09246e2588ce" containerID="29cd18bdea9d3a6e3478968795c7046538f4dd97a9336de2dc505fe9f92bc5fa" exitCode=0
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.477128 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerDied","Data":"29cd18bdea9d3a6e3478968795c7046538f4dd97a9336de2dc505fe9f92bc5fa"}
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.477716 4798 scope.go:117] "RemoveContainer" containerID="29cd18bdea9d3a6e3478968795c7046538f4dd97a9336de2dc505fe9f92bc5fa"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.483663 4798 scope.go:117] "RemoveContainer" containerID="831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"
Feb 03 00:32:28 crc kubenswrapper[4798]: E0203 00:32:28.483975 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf\": container with ID starting with 831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf not found: ID does not exist" containerID="831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.484006 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf"} err="failed to get container status \"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf\": rpc error: code = NotFound desc = could not find container \"831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf\": container with ID starting with 831d2ff9c602e05156ffa8ae614b55629ddb3e34bac19abfa8eafe1fa3a386bf not found: ID does not exist"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.493897 4798 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.493948 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tkb\" (UniqueName: \"kubernetes.io/projected/636c3318-c8fd-44c9-8501-544006efaddc-kube-api-access-t7tkb\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.493967 4798 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.493978 4798 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.493990 4798 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-sasl-users\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.494004 4798 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/636c3318-c8fd-44c9-8501-544006efaddc-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.494015 4798 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/636c3318-c8fd-44c9-8501-544006efaddc-sasl-config\") on node \"crc\" DevicePath \"\""
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.539394 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"]
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.558743 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gm9dt"]
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.572062 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-f6v48"
Feb 03 00:32:28 crc kubenswrapper[4798]: I0203 00:32:28.915343 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636c3318-c8fd-44c9-8501-544006efaddc" path="/var/lib/kubelet/pods/636c3318-c8fd-44c9-8501-544006efaddc/volumes"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.122965 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-f6v48"]
Feb 03 00:32:29 crc kubenswrapper[4798]: W0203 00:32:29.136383 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a197696_3010_4eb8_8a9a_9cdf609ea136.slice/crio-2733dc7cbea01a4e2dce62bd10e17edc9b5d10c248ce714b1c60e81df09a7ce6 WatchSource:0}: Error finding container 2733dc7cbea01a4e2dce62bd10e17edc9b5d10c248ce714b1c60e81df09a7ce6: Status 404 returned error can't find the container with id 2733dc7cbea01a4e2dce62bd10e17edc9b5d10c248ce714b1c60e81df09a7ce6
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.306950 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podUID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9"
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.307039 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podUID="71313990-6f87-41d6-ae1c-d42b159dbb8c"
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.412020 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podUID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d"
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.413181 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podUID="c88e7ac1-9440-4e1c-9140-09246e2588ce"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.493108 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerStarted","Data":"8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98"}
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.495185 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podUID="c88e7ac1-9440-4e1c-9140-09246e2588ce"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.495239 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerStarted","Data":"e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7"}
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.497811 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18"}
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.499813 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podUID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.502237 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad"}
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.504481 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podUID="71313990-6f87-41d6-ae1c-d42b159dbb8c"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.514389 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-f6v48" event={"ID":"3a197696-3010-4eb8-8a9a-9cdf609ea136","Type":"ContainerStarted","Data":"8fe7a5fc77889771f9a101e5d99e9565ce4b11e24f28f9aff9005521fd1d2efd"}
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.514434 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-f6v48" event={"ID":"3a197696-3010-4eb8-8a9a-9cdf609ea136","Type":"ContainerStarted","Data":"2733dc7cbea01a4e2dce62bd10e17edc9b5d10c248ce714b1c60e81df09a7ce6"}
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.520621 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d"}
Feb 03 00:32:29 crc kubenswrapper[4798]: E0203 00:32:29.525297 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podUID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.572488 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" podStartSLOduration=5.031548291 podStartE2EDuration="24.572465989s" podCreationTimestamp="2026-02-03 00:32:05 +0000 UTC" firstStartedPulling="2026-02-03 00:32:09.448085262 +0000 UTC m=+1021.214075273" lastFinishedPulling="2026-02-03 00:32:28.98900296 +0000 UTC m=+1040.754992971" observedRunningTime="2026-02-03 00:32:29.571030494 +0000 UTC m=+1041.337020525" watchObservedRunningTime="2026-02-03 00:32:29.572465989 +0000 UTC m=+1041.338456000"
Feb 03 00:32:29 crc kubenswrapper[4798]: I0203 00:32:29.597024 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-f6v48" podStartSLOduration=3.596993701 podStartE2EDuration="3.596993701s" podCreationTimestamp="2026-02-03 00:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 00:32:29.591346004 +0000 UTC m=+1041.357336045" watchObservedRunningTime="2026-02-03 00:32:29.596993701 +0000 UTC m=+1041.362983712"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.532907 4798 generic.go:334] "Generic (PLEG): container finished" podID="05cade84-cb26-4953-984f-4c9c376378b1" containerID="e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7" exitCode=0
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.532975 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerDied","Data":"e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7"}
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.533370 4798 scope.go:117] "RemoveContainer" containerID="81aa4df94554bf295f9a7c71dde80483f09454a9cc9565ab270228586097cfee"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.533444 4798 scope.go:117] "RemoveContainer" containerID="e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7"
Feb 03 00:32:30 crc kubenswrapper[4798]: E0203 00:32:30.533640 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh_service-telemetry(05cade84-cb26-4953-984f-4c9c376378b1)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" podUID="05cade84-cb26-4953-984f-4c9c376378b1"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.539911 4798 generic.go:334] "Generic (PLEG): container finished" podID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d" containerID="5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18" exitCode=0
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.539991 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerDied","Data":"5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18"}
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.540577 4798 scope.go:117] "RemoveContainer" containerID="5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18"
Feb 03 00:32:30 crc kubenswrapper[4798]: E0203 00:32:30.544038 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7_service-telemetry(9cbb39d9-224e-4ecb-b734-ca8c8652e01d)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podUID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.551794 4798 generic.go:334] "Generic (PLEG): container finished" podID="71313990-6f87-41d6-ae1c-d42b159dbb8c" containerID="ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad" exitCode=0
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.552221 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerDied","Data":"ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad"}
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.552953 4798 scope.go:117] "RemoveContainer" containerID="ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad"
Feb 03 00:32:30 crc kubenswrapper[4798]: E0203 00:32:30.559602 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh_service-telemetry(71313990-6f87-41d6-ae1c-d42b159dbb8c)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podUID="71313990-6f87-41d6-ae1c-d42b159dbb8c"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.566112 4798 generic.go:334] "Generic (PLEG): container finished" podID="c88e7ac1-9440-4e1c-9140-09246e2588ce" containerID="8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98" exitCode=0
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.566237 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerDied","Data":"8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98"}
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.566842 4798 scope.go:117] "RemoveContainer" containerID="8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98"
Feb 03 00:32:30 crc kubenswrapper[4798]: E0203 00:32:30.570354 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn_service-telemetry(c88e7ac1-9440-4e1c-9140-09246e2588ce)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podUID="c88e7ac1-9440-4e1c-9140-09246e2588ce"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.587352 4798 generic.go:334] "Generic (PLEG): container finished" podID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9" containerID="09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d" exitCode=0
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.588678 4798 scope.go:117] "RemoveContainer" containerID="09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.588875 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerDied","Data":"09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d"}
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.592176 4798 scope.go:117] "RemoveContainer" containerID="9a3f087195fe4ec65f8e4afe36589e91ab61037a0b401e1f5db6c559c949835a"
Feb 03 00:32:30 crc kubenswrapper[4798]: E0203 00:32:30.595786 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt_service-telemetry(16e931c1-5d38-4fe7-8827-ea6cd99f3fb9)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podUID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.631285 4798 scope.go:117] "RemoveContainer" containerID="49a7f044216ea0a26a2cb9d5e954bc932911fa2b38496585aecd144ac9774302"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.667615 4798 scope.go:117] "RemoveContainer" containerID="29cd18bdea9d3a6e3478968795c7046538f4dd97a9336de2dc505fe9f92bc5fa"
Feb 03 00:32:30 crc kubenswrapper[4798]: I0203 00:32:30.700871 4798 scope.go:117] "RemoveContainer" containerID="dc69a0b6e5da00b28b2d7c70af6f0d118dd442669c4e52349ac1b2b39472b81a"
Feb 03 00:32:31 crc kubenswrapper[4798]: I0203 00:32:31.596577 4798 scope.go:117] "RemoveContainer" containerID="ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad"
Feb 03 00:32:31 crc kubenswrapper[4798]: E0203 00:32:31.598550 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh_service-telemetry(71313990-6f87-41d6-ae1c-d42b159dbb8c)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podUID="71313990-6f87-41d6-ae1c-d42b159dbb8c"
Feb 03 00:32:31 crc kubenswrapper[4798]: I0203 00:32:31.598830 4798 scope.go:117] "RemoveContainer" containerID="09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d"
Feb 03 00:32:31 crc kubenswrapper[4798]: I0203 00:32:31.600324 4798 scope.go:117] "RemoveContainer" containerID="8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98"
Feb 03 00:32:31 crc kubenswrapper[4798]: E0203 00:32:31.601396 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt_service-telemetry(16e931c1-5d38-4fe7-8827-ea6cd99f3fb9)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podUID="16e931c1-5d38-4fe7-8827-ea6cd99f3fb9"
Feb 03 00:32:31 crc kubenswrapper[4798]: E0203 00:32:31.601924 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn_service-telemetry(c88e7ac1-9440-4e1c-9140-09246e2588ce)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podUID="c88e7ac1-9440-4e1c-9140-09246e2588ce"
Feb 03 00:32:31 crc kubenswrapper[4798]: I0203 00:32:31.602647 4798 scope.go:117] "RemoveContainer" containerID="e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7"
Feb 03 00:32:31 crc kubenswrapper[4798]: E0203 00:32:31.602972 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh_service-telemetry(05cade84-cb26-4953-984f-4c9c376378b1)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" podUID="05cade84-cb26-4953-984f-4c9c376378b1"
Feb 03 00:32:31 crc kubenswrapper[4798]: I0203 00:32:31.609205 4798 scope.go:117] "RemoveContainer" containerID="5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18"
Feb 03 00:32:31 crc kubenswrapper[4798]: E0203 00:32:31.611113 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7_service-telemetry(9cbb39d9-224e-4ecb-b734-ca8c8652e01d)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podUID="9cbb39d9-224e-4ecb-b734-ca8c8652e01d"
Feb 03 00:32:42 crc kubenswrapper[4798]: I0203 00:32:42.908354 4798 scope.go:117] "RemoveContainer" containerID="e27053adb8fe85ef0483a3b12516ed299362289e7bf1bd62ed4746d31dac3ba7"
Feb 03 00:32:42 crc kubenswrapper[4798]: I0203 00:32:42.909018 4798 scope.go:117] "RemoveContainer" containerID="5b05582debabe54aae5e6904d346ad535ba990a376cd31c55db88f0e2cd70c18"
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.684190 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh" event={"ID":"05cade84-cb26-4953-984f-4c9c376378b1","Type":"ContainerStarted","Data":"8f7682a28b0b814d013544e5b7b15da4312ed3632d28998b703fe3adfea691a8"}
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.686390 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"620ef6c7ae8c5bbd27456a6094021b4d8df1a2ecc8116309a4e81db213fba22e"}
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.866298 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.866370 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.866422 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j"
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.867055 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.867197 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650" gracePeriod=600
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.908100 4798 scope.go:117] "RemoveContainer" containerID="8c4d4026b55813144c9afc3a0bd45519e88b9150bb7ee4b7b66b8185d64e4b98"
Feb 03 00:32:43 crc kubenswrapper[4798]: I0203 00:32:43.908870 4798 scope.go:117] "RemoveContainer" containerID="ca404437d865302eccc381383a5439cae7b2c65150008498417b0f21330135ad"
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.701826 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerStarted","Data":"c0c61acf51cfc0167d13ecf20c12cbfc227f963a451d948ee8a57311e92a3e47"}
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.713821 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" event={"ID":"9cbb39d9-224e-4ecb-b734-ca8c8652e01d","Type":"ContainerStarted","Data":"1ec1181bf207ce865f944b641e55b65a3808e1d14d91f937b095ae247ee90029"}
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.718119 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650" exitCode=0
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.718158 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650"}
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.718223 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681"}
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.718248 4798 scope.go:117] "RemoveContainer" containerID="6897fa3c91ba906d1e45e4ba0d97338c3f5515d0f2babad3be934a41248b6f2b"
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.721591 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"242730f0948e0ad53425aa2e050fbaf7c40ee8c1192fc9fe012576e53cb2756c"}
Feb 03 00:32:44 crc kubenswrapper[4798]: I0203 00:32:44.751930 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7" podStartSLOduration=4.951260719 podStartE2EDuration="49.751905623s" podCreationTimestamp="2026-02-03 00:31:55 +0000 UTC" firstStartedPulling="2026-02-03 00:31:59.489181507
+0000 UTC m=+1011.255171518" lastFinishedPulling="2026-02-03 00:32:44.289826421 +0000 UTC m=+1056.055816422" observedRunningTime="2026-02-03 00:32:44.731383299 +0000 UTC m=+1056.497373340" watchObservedRunningTime="2026-02-03 00:32:44.751905623 +0000 UTC m=+1056.517895634" Feb 03 00:32:45 crc kubenswrapper[4798]: I0203 00:32:45.736428 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" event={"ID":"71313990-6f87-41d6-ae1c-d42b159dbb8c","Type":"ContainerStarted","Data":"af7117b2fc691563c1450b9735bfb0e17fcc213f3caa4e376e9b0d62399183df"} Feb 03 00:32:45 crc kubenswrapper[4798]: I0203 00:32:45.738686 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" event={"ID":"c88e7ac1-9440-4e1c-9140-09246e2588ce","Type":"ContainerStarted","Data":"488e2f339e2d1b3335a411eb5c01c33ea280df2eee56e9b448ed750717930d02"} Feb 03 00:32:45 crc kubenswrapper[4798]: I0203 00:32:45.759294 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh" podStartSLOduration=3.285657811 podStartE2EDuration="56.759276853s" podCreationTimestamp="2026-02-03 00:31:49 +0000 UTC" firstStartedPulling="2026-02-03 00:31:51.886550468 +0000 UTC m=+1003.652540479" lastFinishedPulling="2026-02-03 00:32:45.36016951 +0000 UTC m=+1057.126159521" observedRunningTime="2026-02-03 00:32:45.753393621 +0000 UTC m=+1057.519383632" watchObservedRunningTime="2026-02-03 00:32:45.759276853 +0000 UTC m=+1057.525266864" Feb 03 00:32:45 crc kubenswrapper[4798]: I0203 00:32:45.782012 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn" podStartSLOduration=6.905072766 podStartE2EDuration="42.78197533s" podCreationTimestamp="2026-02-03 00:32:03 +0000 UTC" 
firstStartedPulling="2026-02-03 00:32:09.290207071 +0000 UTC m=+1021.056197082" lastFinishedPulling="2026-02-03 00:32:45.167109635 +0000 UTC m=+1056.933099646" observedRunningTime="2026-02-03 00:32:45.774114641 +0000 UTC m=+1057.540104672" watchObservedRunningTime="2026-02-03 00:32:45.78197533 +0000 UTC m=+1057.547965351" Feb 03 00:32:45 crc kubenswrapper[4798]: I0203 00:32:45.908189 4798 scope.go:117] "RemoveContainer" containerID="09fb640177f3bfe53802f48ad95b981587bb00533c23a6651af812446c38728d" Feb 03 00:32:46 crc kubenswrapper[4798]: I0203 00:32:46.747439 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"a6f8a8f4e37b900a52fad3ca9fb904dd9b2f82df17e7083a5658b1bd58cafdd8"} Feb 03 00:32:47 crc kubenswrapper[4798]: I0203 00:32:47.763074 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" event={"ID":"16e931c1-5d38-4fe7-8827-ea6cd99f3fb9","Type":"ContainerStarted","Data":"e13f9d7377c746b51406f2c41c248d48f4db7595cda13ba41ae89131d60913f7"} Feb 03 00:32:47 crc kubenswrapper[4798]: I0203 00:32:47.785488 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt" podStartSLOduration=8.962544788 podStartE2EDuration="56.785466168s" podCreationTimestamp="2026-02-03 00:31:51 +0000 UTC" firstStartedPulling="2026-02-03 00:31:59.358397111 +0000 UTC m=+1011.124387122" lastFinishedPulling="2026-02-03 00:32:47.181318481 +0000 UTC m=+1058.947308502" observedRunningTime="2026-02-03 00:32:47.780682243 +0000 UTC m=+1059.546672284" watchObservedRunningTime="2026-02-03 00:32:47.785466168 +0000 UTC m=+1059.551456179" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.228576 4798 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/qdr-test"] Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.229806 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.232538 4798 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.233798 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.241289 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.256821 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.256868 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vkm\" (UniqueName: \"kubernetes.io/projected/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-kube-api-access-26vkm\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.256899 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-qdr-test-config\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.358254 4798 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.358317 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vkm\" (UniqueName: \"kubernetes.io/projected/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-kube-api-access-26vkm\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.358355 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-qdr-test-config\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.359167 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-qdr-test-config\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.374493 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26vkm\" (UniqueName: \"kubernetes.io/projected/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-kube-api-access-26vkm\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.375247 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/d4a7a6fe-905f-40e8-929d-07bd7e45d58e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"d4a7a6fe-905f-40e8-929d-07bd7e45d58e\") " pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.557558 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.783058 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 03 00:32:57 crc kubenswrapper[4798]: I0203 00:32:57.835326 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"d4a7a6fe-905f-40e8-929d-07bd7e45d58e","Type":"ContainerStarted","Data":"7ec2c1cc6a9277803eae08e91c0393036f859ab4a6ce74af0ef09da475a527a4"} Feb 03 00:33:06 crc kubenswrapper[4798]: I0203 00:33:06.897346 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"d4a7a6fe-905f-40e8-929d-07bd7e45d58e","Type":"ContainerStarted","Data":"4149d6ef7a41062213a16d982a4f8e36103e3eff2e7bc3a5ef36b934b583a409"} Feb 03 00:33:06 crc kubenswrapper[4798]: I0203 00:33:06.916094 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.163487561 podStartE2EDuration="9.916070373s" podCreationTimestamp="2026-02-03 00:32:57 +0000 UTC" firstStartedPulling="2026-02-03 00:32:57.802246621 +0000 UTC m=+1069.568236632" lastFinishedPulling="2026-02-03 00:33:06.554829433 +0000 UTC m=+1078.320819444" observedRunningTime="2026-02-03 00:33:06.911292518 +0000 UTC m=+1078.677282539" watchObservedRunningTime="2026-02-03 00:33:06.916070373 +0000 UTC m=+1078.682060384" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.174757 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-cwr46"] Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.176009 4798 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.178197 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.178312 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.178332 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.178466 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.178598 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.180511 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.187923 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-cwr46"] Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196574 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196643 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhbt\" (UniqueName: \"kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196743 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196779 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196873 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196899 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " 
pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.196922 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298195 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhbt\" (UniqueName: \"kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298408 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298457 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298566 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-cwr46\" (UID: 
\"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298611 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298639 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.298711 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.299831 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.299833 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script\") pod 
\"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.300134 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.300228 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.300938 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.300996 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log\") pod \"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.321954 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhbt\" (UniqueName: \"kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt\") pod 
\"stf-smoketest-smoke1-cwr46\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.511152 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.543775 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.544929 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.552524 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.605646 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nt6\" (UniqueName: \"kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6\") pod \"curl\" (UID: \"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52\") " pod="service-telemetry/curl" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.707476 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87nt6\" (UniqueName: \"kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6\") pod \"curl\" (UID: \"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52\") " pod="service-telemetry/curl" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.723425 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nt6\" (UniqueName: \"kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6\") pod \"curl\" (UID: \"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52\") " pod="service-telemetry/curl" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.892502 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 03 00:33:07 crc kubenswrapper[4798]: I0203 00:33:07.924636 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-cwr46"] Feb 03 00:33:07 crc kubenswrapper[4798]: W0203 00:33:07.926326 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06103ceb_f7cd_44fd_af29_663a7027538d.slice/crio-845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2 WatchSource:0}: Error finding container 845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2: Status 404 returned error can't find the container with id 845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2 Feb 03 00:33:08 crc kubenswrapper[4798]: I0203 00:33:08.087860 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 03 00:33:08 crc kubenswrapper[4798]: I0203 00:33:08.930302 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52","Type":"ContainerStarted","Data":"71baf040f846cd5250dc4f8b90613f969159ea0ee2a9b0a6711b0d77d9e2ce34"} Feb 03 00:33:08 crc kubenswrapper[4798]: I0203 00:33:08.932388 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerStarted","Data":"845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2"} Feb 03 00:33:22 crc kubenswrapper[4798]: E0203 00:33:22.997058 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo" Feb 03 00:33:22 crc kubenswrapper[4798]: E0203 00:33:22.998024 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:zGhO5VV2suIe9t2OBDbRVeHN,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MDA4MjM3MiwiaWF0IjoxNzcwMDc4NzcyLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIzZWM2OTM0Ni1lMGE4LTQ5YmUtOTUyMC1hN2MxYmEyYWUyZDkiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjQ2YjZmYzc2LTNhZTUtNDg3OS1hYWFjLWFiOWFjNmRkMTNiZSJ9fSwibmJmIjoxNzcwMDc4NzcyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.PeZ8wgANomt9hEEizuTlwNZ2yT1xNYZHym51D-o2971QlCIbMdNeXLo8VKip8zIznWuUQW19itdBZuz1IrNHOmBC_vGV_eVG0Riu4yx6eHq1qle5g1jbodC24gV9Zbd93Jj0Nb0GUdfK19panCSHq4pySsOISfg_rpElFJsnM4UiTl1Fv0S538OtjUsRXb38tgGn6UNBkvItFyAFltOcq2Ekr01sqxyB31PlZyezPCkGZzyn0oe9yJsA-nizTGjKgMf_wDyDqOK_qTWPXXzoyBh9CjyeUDCLOxtZ11oxMzljh45etoNFq9WfUny9XXHEgAQvUg7p41j0OZku1DWjUN4i-sR7rg9e1Dle-lp0BOWnEdu3FfuKNiI1pXh8hgviZexb-zAEyo1T5NC7-v3366bPsGnPD9Qn0hqwjeQ04vRkQttnffAPaE8_DMq5rmImrqpQ-TfUKzNE9i_hPWISbe--Hn7SPpbA9o_afOdfgTH6xvYBX3HtfKfwFx0ybHnLs_vZSdNdxVaLwIwucmp2yOv_ptbITPkffGqJH3rV0AWupD7lyzWc_dyyjZq0NiyJH1YmabZX0gxy6u1rdAhb7JSBCKwSXFK6Qfp2pt3EvQijeeNfYK4f92voDkLYeHFSpm5KAYJDK7MCLmEB8YJyR6VnhN0BU_y9Gprgf5uppbw,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-co
llectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hhbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-cwr46_service-telemetry(06103ceb-f7cd-44fd-af29-663a7027538d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:33:24 crc kubenswrapper[4798]: I0203 00:33:24.084330 4798 generic.go:334] "Generic (PLEG): container finished" podID="35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52" containerID="ed523ed13d71f8d3d5e7c5847dbcf8a8fb7218b192b9f79a99c1051241c6c21e" exitCode=0 Feb 03 00:33:24 crc kubenswrapper[4798]: I0203 00:33:24.084391 4798 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/curl" event={"ID":"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52","Type":"ContainerDied","Data":"ed523ed13d71f8d3d5e7c5847dbcf8a8fb7218b192b9f79a99c1051241c6c21e"} Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.331425 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.440033 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87nt6\" (UniqueName: \"kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6\") pod \"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52\" (UID: \"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52\") " Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.445814 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6" (OuterVolumeSpecName: "kube-api-access-87nt6") pod "35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52" (UID: "35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52"). InnerVolumeSpecName "kube-api-access-87nt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.477334 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52/curl/0.log" Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.541607 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87nt6\" (UniqueName: \"kubernetes.io/projected/35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52-kube-api-access-87nt6\") on node \"crc\" DevicePath \"\"" Feb 03 00:33:25 crc kubenswrapper[4798]: I0203 00:33:25.734600 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-wwz4f_9513eb34-b9eb-4ea3-b49f-4c468e08a5d5/prometheus-webhook-snmp/0.log" Feb 03 00:33:26 crc kubenswrapper[4798]: I0203 00:33:26.108091 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52","Type":"ContainerDied","Data":"71baf040f846cd5250dc4f8b90613f969159ea0ee2a9b0a6711b0d77d9e2ce34"} Feb 03 00:33:26 crc kubenswrapper[4798]: I0203 00:33:26.108154 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71baf040f846cd5250dc4f8b90613f969159ea0ee2a9b0a6711b0d77d9e2ce34" Feb 03 00:33:26 crc kubenswrapper[4798]: I0203 00:33:26.108262 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 03 00:33:37 crc kubenswrapper[4798]: E0203 00:33:37.736540 4798 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo" Feb 03 00:33:37 crc kubenswrapper[4798]: E0203 00:33:37.737379 4798 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-ceilometer,Image:quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo,Command:[/smoketest_ceilometer_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:zGhO5VV2suIe9t2OBDbRVeHN,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MDA4MjM3MiwiaWF0IjoxNzcwMDc4NzcyLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIzZWM2OTM0Ni1lMGE4LTQ5YmUtOTUyMC1hN2MxYmEyYWUyZDkiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjQ2YjZmYzc2LTNhZTUtNDg3OS1hYWFjLWFiOWFjNmRkMTNiZSJ9fSwibmJmIjoxNzcwMDc4NzcyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.PeZ8wgANomt9hEEizuTlwNZ2yT1xNYZHym51D-o2971QlCIbMdNeXLo8VKip8zIznWuUQW19itdBZuz1IrNHOmBC_vGV_eVG0Riu4yx6eHq1qle5g1jbodC24gV9Zbd93Jj0Nb0GUdfK19panCSHq4pySsOISfg_rpElFJsnM4UiTl1Fv0S538OtjUsRXb38tgGn6UNBkvItFyAFltOcq2Ekr01sqxyB31PlZyezPCkGZzyn0oe9yJsA-nizTGjKgMf_wDyDqOK_qTWPXXzoyBh9CjyeUDCLOxtZ11oxMzljh45etoNFq9WfUny9XXHEgAQvUg7p41j0OZku1DWjUN4i-sR7rg9e1Dle-lp0BOWnEdu3FfuKNiI1pXh8hgviZexb-zAEyo1T5NC7-v3366bPsGnPD9Qn0hqwjeQ04vRkQttnffAPaE8_DMq5rmImrqpQ-TfUKzNE9i_hPWISbe--Hn7SPpbA9o_afOdfgTH6xvYBX3HtfKfwFx0ybHnLs_vZSdNdxVaLw
Iwucmp2yOv_ptbITPkffGqJH3rV0AWupD7lyzWc_dyyjZq0NiyJH1YmabZX0gxy6u1rdAhb7JSBCKwSXFK6Qfp2pt3EvQijeeNfYK4f92voDkLYeHFSpm5KAYJDK7MCLmEB8YJyR6VnhN0BU_y9Gprgf5uppbw,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ceilometer-publisher,ReadOnly:false,MountPath:/ceilometer_publish.py,SubPath:ceilometer_publish.py,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-entrypoint-script,ReadOnly:false,MountPath:/smoketest_ceilometer_entrypoint.sh,SubPath:smoketest_ceilometer_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hhbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-cwr46_service-telemetry(06103ceb-f7cd-44fd-af29-663a7027538d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 00:33:37 crc kubenswrapper[4798]: E0203 00:33:37.738701 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\", failed to \"StartContainer\" for \"smoketest-ceilometer\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="service-telemetry/stf-smoketest-smoke1-cwr46" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" Feb 03 00:33:38 crc kubenswrapper[4798]: E0203 00:33:38.998491 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-cwr46" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" Feb 03 00:33:39 crc kubenswrapper[4798]: I0203 00:33:39.199447 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerStarted","Data":"d5dcc37cf96fa93bb2c5cdc795f8b3ae8b620c5036fc483166fdd503fcd99d61"} Feb 03 00:33:39 crc kubenswrapper[4798]: E0203 00:33:39.201397 4798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-cwr46" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" Feb 03 00:33:54 crc kubenswrapper[4798]: I0203 00:33:54.303120 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerStarted","Data":"79c361e0ea01e9cda03eac25a1e3312993c20b7730c10135803410dcb51536e2"} Feb 03 00:33:54 crc kubenswrapper[4798]: I0203 00:33:54.329924 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-cwr46" podStartSLOduration=1.843394849 
podStartE2EDuration="47.329901715s" podCreationTimestamp="2026-02-03 00:33:07 +0000 UTC" firstStartedPulling="2026-02-03 00:33:07.928874304 +0000 UTC m=+1079.694864315" lastFinishedPulling="2026-02-03 00:33:53.41538117 +0000 UTC m=+1125.181371181" observedRunningTime="2026-02-03 00:33:54.325407382 +0000 UTC m=+1126.091397403" watchObservedRunningTime="2026-02-03 00:33:54.329901715 +0000 UTC m=+1126.095891736" Feb 03 00:33:55 crc kubenswrapper[4798]: I0203 00:33:55.889394 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-wwz4f_9513eb34-b9eb-4ea3-b49f-4c468e08a5d5/prometheus-webhook-snmp/0.log" Feb 03 00:34:13 crc kubenswrapper[4798]: I0203 00:34:13.437015 4798 generic.go:334] "Generic (PLEG): container finished" podID="06103ceb-f7cd-44fd-af29-663a7027538d" containerID="d5dcc37cf96fa93bb2c5cdc795f8b3ae8b620c5036fc483166fdd503fcd99d61" exitCode=0 Feb 03 00:34:13 crc kubenswrapper[4798]: I0203 00:34:13.437106 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerDied","Data":"d5dcc37cf96fa93bb2c5cdc795f8b3ae8b620c5036fc483166fdd503fcd99d61"} Feb 03 00:34:13 crc kubenswrapper[4798]: I0203 00:34:13.437782 4798 scope.go:117] "RemoveContainer" containerID="d5dcc37cf96fa93bb2c5cdc795f8b3ae8b620c5036fc483166fdd503fcd99d61" Feb 03 00:34:25 crc kubenswrapper[4798]: I0203 00:34:25.522599 4798 generic.go:334] "Generic (PLEG): container finished" podID="06103ceb-f7cd-44fd-af29-663a7027538d" containerID="79c361e0ea01e9cda03eac25a1e3312993c20b7730c10135803410dcb51536e2" exitCode=0 Feb 03 00:34:25 crc kubenswrapper[4798]: I0203 00:34:25.522638 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerDied","Data":"79c361e0ea01e9cda03eac25a1e3312993c20b7730c10135803410dcb51536e2"} Feb 03 
00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.842563 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.936873 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hhbt\" (UniqueName: \"kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.936924 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.936979 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.937032 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.937087 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: 
\"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.937125 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.937155 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log\") pod \"06103ceb-f7cd-44fd-af29-663a7027538d\" (UID: \"06103ceb-f7cd-44fd-af29-663a7027538d\") " Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.941720 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt" (OuterVolumeSpecName: "kube-api-access-9hhbt") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "kube-api-access-9hhbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.952345 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.955323 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.955384 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.956255 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.964918 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:26 crc kubenswrapper[4798]: I0203 00:34:26.965610 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "06103ceb-f7cd-44fd-af29-663a7027538d" (UID: "06103ceb-f7cd-44fd-af29-663a7027538d"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038603 4798 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038640 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hhbt\" (UniqueName: \"kubernetes.io/projected/06103ceb-f7cd-44fd-af29-663a7027538d-kube-api-access-9hhbt\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038675 4798 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038684 4798 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038693 4798 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038702 4798 reconciler_common.go:293] "Volume 
detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.038711 4798 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/06103ceb-f7cd-44fd-af29-663a7027538d-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.537809 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-cwr46" event={"ID":"06103ceb-f7cd-44fd-af29-663a7027538d","Type":"ContainerDied","Data":"845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2"} Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.538157 4798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845d806ee116b2e96f94d3b3fee56e4bb14c94df3220fc39483dd802da109fc2" Feb 03 00:34:27 crc kubenswrapper[4798]: I0203 00:34:27.538227 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-cwr46" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587429 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:28 crc kubenswrapper[4798]: E0203 00:34:28.587698 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" containerName="smoketest-collectd" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587710 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" containerName="smoketest-collectd" Feb 03 00:34:28 crc kubenswrapper[4798]: E0203 00:34:28.587725 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52" containerName="curl" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587731 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52" containerName="curl" Feb 03 00:34:28 crc kubenswrapper[4798]: E0203 00:34:28.587755 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" containerName="smoketest-ceilometer" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587762 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" containerName="smoketest-ceilometer" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587866 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" containerName="smoketest-collectd" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587878 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="35939c26-65a1-4ea4-9c3b-e4fc2fcb0b52" containerName="curl" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.587887 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="06103ceb-f7cd-44fd-af29-663a7027538d" 
containerName="smoketest-ceilometer" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.588314 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.599361 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.660206 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n2qk\" (UniqueName: \"kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk\") pod \"infrawatch-operators-qjrvb\" (UID: \"4109a41f-95f4-4c6c-b074-8f3e98060edb\") " pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.762082 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n2qk\" (UniqueName: \"kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk\") pod \"infrawatch-operators-qjrvb\" (UID: \"4109a41f-95f4-4c6c-b074-8f3e98060edb\") " pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.787750 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n2qk\" (UniqueName: \"kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk\") pod \"infrawatch-operators-qjrvb\" (UID: \"4109a41f-95f4-4c6c-b074-8f3e98060edb\") " pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:28 crc kubenswrapper[4798]: I0203 00:34:28.914500 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.011588 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-cwr46_06103ceb-f7cd-44fd-af29-663a7027538d/smoketest-collectd/0.log" Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.280620 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-cwr46_06103ceb-f7cd-44fd-af29-663a7027538d/smoketest-ceilometer/0.log" Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.329623 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.564243 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjrvb" event={"ID":"4109a41f-95f4-4c6c-b074-8f3e98060edb","Type":"ContainerStarted","Data":"f134cc372eb93d5b6d1f8d70110165f77491cebcac5b887d8f5f7253cfc3318a"} Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.584011 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-f6v48_3a197696-3010-4eb8-8a9a-9cdf609ea136/default-interconnect/0.log" Feb 03 00:34:29 crc kubenswrapper[4798]: I0203 00:34:29.905947 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh_71313990-6f87-41d6-ae1c-d42b159dbb8c/bridge/2.log" Feb 03 00:34:30 crc kubenswrapper[4798]: I0203 00:34:30.261881 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-2fjkh_71313990-6f87-41d6-ae1c-d42b159dbb8c/sg-core/0.log" Feb 03 00:34:30 crc kubenswrapper[4798]: I0203 00:34:30.578207 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn_c88e7ac1-9440-4e1c-9140-09246e2588ce/bridge/2.log" Feb 03 00:34:30 crc kubenswrapper[4798]: I0203 00:34:30.582246 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjrvb" event={"ID":"4109a41f-95f4-4c6c-b074-8f3e98060edb","Type":"ContainerStarted","Data":"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c"} Feb 03 00:34:30 crc kubenswrapper[4798]: I0203 00:34:30.607995 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-qjrvb" podStartSLOduration=2.4839274749999998 podStartE2EDuration="2.607967059s" podCreationTimestamp="2026-02-03 00:34:28 +0000 UTC" firstStartedPulling="2026-02-03 00:34:29.340026578 +0000 UTC m=+1161.106016629" lastFinishedPulling="2026-02-03 00:34:29.464066202 +0000 UTC m=+1161.230056213" observedRunningTime="2026-02-03 00:34:30.601406841 +0000 UTC m=+1162.367396882" watchObservedRunningTime="2026-02-03 00:34:30.607967059 +0000 UTC m=+1162.373957080" Feb 03 00:34:30 crc kubenswrapper[4798]: I0203 00:34:30.828405 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-58c67b649d-vd8cn_c88e7ac1-9440-4e1c-9140-09246e2588ce/sg-core/0.log" Feb 03 00:34:31 crc kubenswrapper[4798]: I0203 00:34:31.085883 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt_16e931c1-5d38-4fe7-8827-ea6cd99f3fb9/bridge/2.log" Feb 03 00:34:31 crc kubenswrapper[4798]: I0203 00:34:31.325490 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-lt7pt_16e931c1-5d38-4fe7-8827-ea6cd99f3fb9/sg-core/0.log" Feb 03 00:34:31 crc kubenswrapper[4798]: I0203 00:34:31.584443 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh_05cade84-cb26-4953-984f-4c9c376378b1/bridge/2.log" Feb 03 00:34:31 crc kubenswrapper[4798]: I0203 00:34:31.848532 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5d74b86695-k76jh_05cade84-cb26-4953-984f-4c9c376378b1/sg-core/0.log" Feb 03 00:34:32 crc kubenswrapper[4798]: I0203 00:34:32.114345 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7_9cbb39d9-224e-4ecb-b734-ca8c8652e01d/bridge/2.log" Feb 03 00:34:32 crc kubenswrapper[4798]: I0203 00:34:32.390837 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-dppg7_9cbb39d9-224e-4ecb-b734-ca8c8652e01d/sg-core/0.log" Feb 03 00:34:34 crc kubenswrapper[4798]: I0203 00:34:34.653986 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bbbc889bc-rjnfk_c46c738a-1240-42e9-9d41-adab078b0a95/operator/0.log" Feb 03 00:34:35 crc kubenswrapper[4798]: I0203 00:34:35.006362 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fb8978b7-2e43-4d75-a710-00e122a6f9a7/prometheus/0.log" Feb 03 00:34:35 crc kubenswrapper[4798]: I0203 00:34:35.330164 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_eee642f8-9cb1-4c5a-8f54-0f1826c3e2a8/elasticsearch/0.log" Feb 03 00:34:35 crc kubenswrapper[4798]: I0203 00:34:35.618753 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-wwz4f_9513eb34-b9eb-4ea3-b49f-4c468e08a5d5/prometheus-webhook-snmp/0.log" Feb 03 00:34:35 crc kubenswrapper[4798]: I0203 00:34:35.944275 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_alertmanager-default-0_2067d204-5ad9-43a2-9233-ee24c671e516/alertmanager/0.log" Feb 03 00:34:38 crc kubenswrapper[4798]: I0203 00:34:38.918407 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:38 crc kubenswrapper[4798]: I0203 00:34:38.919632 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:38 crc kubenswrapper[4798]: I0203 00:34:38.953421 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:39 crc kubenswrapper[4798]: I0203 00:34:39.689637 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:41 crc kubenswrapper[4798]: I0203 00:34:41.972085 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:41 crc kubenswrapper[4798]: I0203 00:34:41.973038 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-qjrvb" podUID="4109a41f-95f4-4c6c-b074-8f3e98060edb" containerName="registry-server" containerID="cri-o://0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c" gracePeriod=2 Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.318600 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.358477 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n2qk\" (UniqueName: \"kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk\") pod \"4109a41f-95f4-4c6c-b074-8f3e98060edb\" (UID: \"4109a41f-95f4-4c6c-b074-8f3e98060edb\") " Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.364460 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk" (OuterVolumeSpecName: "kube-api-access-5n2qk") pod "4109a41f-95f4-4c6c-b074-8f3e98060edb" (UID: "4109a41f-95f4-4c6c-b074-8f3e98060edb"). InnerVolumeSpecName "kube-api-access-5n2qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.460558 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n2qk\" (UniqueName: \"kubernetes.io/projected/4109a41f-95f4-4c6c-b074-8f3e98060edb-kube-api-access-5n2qk\") on node \"crc\" DevicePath \"\"" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.674890 4798 generic.go:334] "Generic (PLEG): container finished" podID="4109a41f-95f4-4c6c-b074-8f3e98060edb" containerID="0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c" exitCode=0 Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.674940 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjrvb" event={"ID":"4109a41f-95f4-4c6c-b074-8f3e98060edb","Type":"ContainerDied","Data":"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c"} Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.674970 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-qjrvb" 
event={"ID":"4109a41f-95f4-4c6c-b074-8f3e98060edb","Type":"ContainerDied","Data":"f134cc372eb93d5b6d1f8d70110165f77491cebcac5b887d8f5f7253cfc3318a"} Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.674990 4798 scope.go:117] "RemoveContainer" containerID="0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.674989 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-qjrvb" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.697869 4798 scope.go:117] "RemoveContainer" containerID="0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c" Feb 03 00:34:42 crc kubenswrapper[4798]: E0203 00:34:42.698559 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c\": container with ID starting with 0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c not found: ID does not exist" containerID="0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.698600 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c"} err="failed to get container status \"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c\": rpc error: code = NotFound desc = could not find container \"0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c\": container with ID starting with 0e3d08d0d9fdb2137f9447d0d95eec20555ad757e5a6183e7d7db115ff2b0e7c not found: ID does not exist" Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.717873 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.734451 4798 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-qjrvb"] Feb 03 00:34:42 crc kubenswrapper[4798]: I0203 00:34:42.919214 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4109a41f-95f4-4c6c-b074-8f3e98060edb" path="/var/lib/kubelet/pods/4109a41f-95f4-4c6c-b074-8f3e98060edb/volumes" Feb 03 00:34:47 crc kubenswrapper[4798]: I0203 00:34:47.759957 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-55b89ddfb9-dq9bs_15f3e2a8-2d6f-47ec-aa00-9b67b995d64b/operator/0.log" Feb 03 00:34:50 crc kubenswrapper[4798]: I0203 00:34:50.085038 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bbbc889bc-rjnfk_c46c738a-1240-42e9-9d41-adab078b0a95/operator/0.log" Feb 03 00:34:50 crc kubenswrapper[4798]: I0203 00:34:50.371723 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_d4a7a6fe-905f-40e8-929d-07bd7e45d58e/qdr/0.log" Feb 03 00:35:13 crc kubenswrapper[4798]: I0203 00:35:13.867257 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:35:13 crc kubenswrapper[4798]: I0203 00:35:13.867774 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.376533 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l297n/must-gather-8bcpf"] Feb 03 00:35:14 crc kubenswrapper[4798]: E0203 
00:35:14.376818 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4109a41f-95f4-4c6c-b074-8f3e98060edb" containerName="registry-server" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.376833 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="4109a41f-95f4-4c6c-b074-8f3e98060edb" containerName="registry-server" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.376958 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="4109a41f-95f4-4c6c-b074-8f3e98060edb" containerName="registry-server" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.377610 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.383214 4798 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l297n"/"default-dockercfg-cnbh9" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.383214 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l297n"/"openshift-service-ca.crt" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.383787 4798 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l297n"/"kube-root-ca.crt" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.388388 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l297n/must-gather-8bcpf"] Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.494717 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.494853 4798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrxq\" (UniqueName: \"kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.595705 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvrxq\" (UniqueName: \"kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.595762 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.596167 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.614871 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvrxq\" (UniqueName: \"kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq\") pod \"must-gather-8bcpf\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.696530 4798 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.922009 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l297n/must-gather-8bcpf"] Feb 03 00:35:14 crc kubenswrapper[4798]: I0203 00:35:14.932388 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l297n/must-gather-8bcpf" event={"ID":"f9a1eae8-1f6c-4217-b4fe-128d9271c20b","Type":"ContainerStarted","Data":"16b57af1a7f00a618dc73106a9ee17ac64dc42333850d199da2bb62e786e7972"} Feb 03 00:35:23 crc kubenswrapper[4798]: I0203 00:35:23.003816 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l297n/must-gather-8bcpf" event={"ID":"f9a1eae8-1f6c-4217-b4fe-128d9271c20b","Type":"ContainerStarted","Data":"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55"} Feb 03 00:35:24 crc kubenswrapper[4798]: I0203 00:35:24.012441 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l297n/must-gather-8bcpf" event={"ID":"f9a1eae8-1f6c-4217-b4fe-128d9271c20b","Type":"ContainerStarted","Data":"b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b"} Feb 03 00:35:24 crc kubenswrapper[4798]: I0203 00:35:24.035478 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l297n/must-gather-8bcpf" podStartSLOduration=2.298998729 podStartE2EDuration="10.035442118s" podCreationTimestamp="2026-02-03 00:35:14 +0000 UTC" firstStartedPulling="2026-02-03 00:35:14.912734196 +0000 UTC m=+1206.678724207" lastFinishedPulling="2026-02-03 00:35:22.649177555 +0000 UTC m=+1214.415167596" observedRunningTime="2026-02-03 00:35:24.024937689 +0000 UTC m=+1215.790927720" watchObservedRunningTime="2026-02-03 00:35:24.035442118 +0000 UTC m=+1215.801432169" Feb 03 00:35:43 crc kubenswrapper[4798]: I0203 00:35:43.866877 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:35:43 crc kubenswrapper[4798]: I0203 00:35:43.867380 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:36:05 crc kubenswrapper[4798]: I0203 00:36:05.153116 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t2s9r_67a133bc-8ad5-4088-b5ec-122ec4f32c4d/control-plane-machine-set-operator/0.log" Feb 03 00:36:05 crc kubenswrapper[4798]: I0203 00:36:05.322179 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8tbx2_ed0d46a1-22a9-46aa-b0d8-c65624861d9a/machine-api-operator/0.log" Feb 03 00:36:05 crc kubenswrapper[4798]: I0203 00:36:05.341418 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8tbx2_ed0d46a1-22a9-46aa-b0d8-c65624861d9a/kube-rbac-proxy/0.log" Feb 03 00:36:13 crc kubenswrapper[4798]: I0203 00:36:13.866909 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:36:13 crc kubenswrapper[4798]: I0203 00:36:13.867673 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:36:13 crc kubenswrapper[4798]: I0203 00:36:13.867748 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:36:13 crc kubenswrapper[4798]: I0203 00:36:13.868647 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 00:36:13 crc kubenswrapper[4798]: I0203 00:36:13.868787 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681" gracePeriod=600 Feb 03 00:36:14 crc kubenswrapper[4798]: I0203 00:36:14.384403 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681" exitCode=0 Feb 03 00:36:14 crc kubenswrapper[4798]: I0203 00:36:14.384478 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681"} Feb 03 00:36:14 crc kubenswrapper[4798]: I0203 00:36:14.384743 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" 
event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"9fba349678c80452f53cc5ad9fdcb4b9593fa0f9c61d8a099cc4cac155e9fc3e"} Feb 03 00:36:14 crc kubenswrapper[4798]: I0203 00:36:14.384766 4798 scope.go:117] "RemoveContainer" containerID="efe22b7553ea1789aeef45bd5caacf46be6b6cbd47032c443a4e9ad6640e0650" Feb 03 00:36:17 crc kubenswrapper[4798]: I0203 00:36:17.384961 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-j9bhv_e0434cd3-e97a-4434-919d-258cf482b4cb/cert-manager-controller/0.log" Feb 03 00:36:17 crc kubenswrapper[4798]: I0203 00:36:17.519348 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qm6qm_cdb1b01d-b0e6-411b-98d4-794ff69012ed/cert-manager-cainjector/0.log" Feb 03 00:36:17 crc kubenswrapper[4798]: I0203 00:36:17.552844 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-l5zfv_106aaf86-eb7b-4039-a9f8-24107a7d0f0b/cert-manager-webhook/0.log" Feb 03 00:36:30 crc kubenswrapper[4798]: I0203 00:36:30.809301 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tp6c4_2c3d3d69-dea0-40fb-9d55-252fb8b34c1c/prometheus-operator/0.log" Feb 03 00:36:31 crc kubenswrapper[4798]: I0203 00:36:31.063804 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l_28f1b740-046a-47f9-9257-62e739509702/prometheus-operator-admission-webhook/0.log" Feb 03 00:36:31 crc kubenswrapper[4798]: I0203 00:36:31.139027 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg_7ac09692-d88a-47d1-b4e6-05cc3020ebf1/prometheus-operator-admission-webhook/0.log" Feb 03 00:36:31 crc kubenswrapper[4798]: I0203 00:36:31.299224 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wpfwr_b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb/operator/0.log" Feb 03 00:36:31 crc kubenswrapper[4798]: I0203 00:36:31.323008 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-dlssk_4e8887da-9f98-4a78-a5df-756bf2d2d31e/perses-operator/0.log" Feb 03 00:36:44 crc kubenswrapper[4798]: I0203 00:36:44.786322 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/util/0.log" Feb 03 00:36:44 crc kubenswrapper[4798]: I0203 00:36:44.950529 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/pull/0.log" Feb 03 00:36:44 crc kubenswrapper[4798]: I0203 00:36:44.961689 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/pull/0.log" Feb 03 00:36:44 crc kubenswrapper[4798]: I0203 00:36:44.987809 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.122054 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/pull/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.128103 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/extract/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 
00:36:45.148817 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fk54w7_1d3093ff-b293-4d02-9db9-724830306c14/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.269068 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.429802 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.431802 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/pull/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.455610 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/pull/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.610340 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/pull/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.618777 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/extract/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.673586 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqfpwd_872f0c5a-672d-4f2a-a4ce-ac6db6a8c75c/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.793709 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.902560 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/util/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.970212 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/pull/0.log" Feb 03 00:36:45 crc kubenswrapper[4798]: I0203 00:36:45.973404 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/pull/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.128378 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/util/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.130596 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/pull/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.159298 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55bgzx_71af01f0-a9e8-4957-a949-22963b9fa386/extract/0.log" Feb 03 
00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.316283 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/util/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.446174 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/util/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.452872 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/pull/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.481490 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/pull/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.605758 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/util/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.614667 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/extract/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.650713 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08b7l4l_2c793aa4-86ea-41a2-875f-a944cfcb19b9/pull/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.762829 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-utilities/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.940768 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-utilities/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.975624 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-content/0.log" Feb 03 00:36:46 crc kubenswrapper[4798]: I0203 00:36:46.978892 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-content/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.126551 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-content/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.196235 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/extract-utilities/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.393243 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-utilities/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.397029 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-85k55_35538c6f-6fc5-4f9b-ab5c-88d343643ccd/registry-server/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.559014 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-utilities/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.586876 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-content/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.589372 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-content/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.740913 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-utilities/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.768138 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/extract-content/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.931451 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2zlmb_f6708d80-8899-4ef9-a0aa-0acb736f01ed/marketplace-operator/0.log" Feb 03 00:36:47 crc kubenswrapper[4798]: I0203 00:36:47.940870 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jhwdd_7d03cb63-eab3-4c46-b6fa-ef7130ac7e72/registry-server/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.061306 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-utilities/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.251795 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-content/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.291811 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-utilities/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.305578 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-content/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.394683 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-utilities/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.446834 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/extract-content/0.log" Feb 03 00:36:48 crc kubenswrapper[4798]: I0203 00:36:48.616600 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8bgg_05b13579-cc4d-48e7-87f2-814f2adf2fdd/registry-server/0.log" Feb 03 00:36:59 crc kubenswrapper[4798]: I0203 00:36:59.656460 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8cc945bb6-42k7l_28f1b740-046a-47f9-9257-62e739509702/prometheus-operator-admission-webhook/0.log" Feb 03 00:36:59 crc kubenswrapper[4798]: I0203 00:36:59.731771 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tp6c4_2c3d3d69-dea0-40fb-9d55-252fb8b34c1c/prometheus-operator/0.log" Feb 03 00:36:59 crc kubenswrapper[4798]: I0203 00:36:59.733392 4798 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8cc945bb6-bm7fg_7ac09692-d88a-47d1-b4e6-05cc3020ebf1/prometheus-operator-admission-webhook/0.log" Feb 03 00:36:59 crc kubenswrapper[4798]: I0203 00:36:59.934334 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wpfwr_b25c3b0d-a5bd-4d7b-bab4-dc475d1c4deb/operator/0.log" Feb 03 00:36:59 crc kubenswrapper[4798]: I0203 00:36:59.986440 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-dlssk_4e8887da-9f98-4a78-a5df-756bf2d2d31e/perses-operator/0.log" Feb 03 00:37:51 crc kubenswrapper[4798]: I0203 00:37:51.137084 4798 generic.go:334] "Generic (PLEG): container finished" podID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerID="fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55" exitCode=0 Feb 03 00:37:51 crc kubenswrapper[4798]: I0203 00:37:51.137231 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l297n/must-gather-8bcpf" event={"ID":"f9a1eae8-1f6c-4217-b4fe-128d9271c20b","Type":"ContainerDied","Data":"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55"} Feb 03 00:37:51 crc kubenswrapper[4798]: I0203 00:37:51.139345 4798 scope.go:117] "RemoveContainer" containerID="fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55" Feb 03 00:37:51 crc kubenswrapper[4798]: I0203 00:37:51.868282 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l297n_must-gather-8bcpf_f9a1eae8-1f6c-4217-b4fe-128d9271c20b/gather/0.log" Feb 03 00:37:58 crc kubenswrapper[4798]: I0203 00:37:58.629430 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l297n/must-gather-8bcpf"] Feb 03 00:37:58 crc kubenswrapper[4798]: I0203 00:37:58.630320 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l297n/must-gather-8bcpf" 
podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="copy" containerID="cri-o://b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b" gracePeriod=2 Feb 03 00:37:58 crc kubenswrapper[4798]: I0203 00:37:58.639959 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l297n/must-gather-8bcpf"] Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.043713 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l297n_must-gather-8bcpf_f9a1eae8-1f6c-4217-b4fe-128d9271c20b/copy/0.log" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.044247 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.135361 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvrxq\" (UniqueName: \"kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq\") pod \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.135621 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output\") pod \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\" (UID: \"f9a1eae8-1f6c-4217-b4fe-128d9271c20b\") " Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.140926 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq" (OuterVolumeSpecName: "kube-api-access-wvrxq") pod "f9a1eae8-1f6c-4217-b4fe-128d9271c20b" (UID: "f9a1eae8-1f6c-4217-b4fe-128d9271c20b"). InnerVolumeSpecName "kube-api-access-wvrxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.189443 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f9a1eae8-1f6c-4217-b4fe-128d9271c20b" (UID: "f9a1eae8-1f6c-4217-b4fe-128d9271c20b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.213416 4798 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l297n_must-gather-8bcpf_f9a1eae8-1f6c-4217-b4fe-128d9271c20b/copy/0.log" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.213835 4798 generic.go:334] "Generic (PLEG): container finished" podID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerID="b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b" exitCode=143 Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.213885 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l297n/must-gather-8bcpf" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.213898 4798 scope.go:117] "RemoveContainer" containerID="b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.231820 4798 scope.go:117] "RemoveContainer" containerID="fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.237100 4798 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.237128 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvrxq\" (UniqueName: \"kubernetes.io/projected/f9a1eae8-1f6c-4217-b4fe-128d9271c20b-kube-api-access-wvrxq\") on node \"crc\" DevicePath \"\"" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.280865 4798 scope.go:117] "RemoveContainer" containerID="b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b" Feb 03 00:37:59 crc kubenswrapper[4798]: E0203 00:37:59.281294 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b\": container with ID starting with b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b not found: ID does not exist" containerID="b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.281328 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b"} err="failed to get container status \"b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b\": rpc error: code = 
NotFound desc = could not find container \"b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b\": container with ID starting with b41cd21f8333e8c113265014e6906d4c2c37f3d020f56a248b30f5f50c664a8b not found: ID does not exist" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.281359 4798 scope.go:117] "RemoveContainer" containerID="fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55" Feb 03 00:37:59 crc kubenswrapper[4798]: E0203 00:37:59.283156 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55\": container with ID starting with fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55 not found: ID does not exist" containerID="fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55" Feb 03 00:37:59 crc kubenswrapper[4798]: I0203 00:37:59.283206 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55"} err="failed to get container status \"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55\": rpc error: code = NotFound desc = could not find container \"fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55\": container with ID starting with fbdf381772ab2af653ec5191939026a01be5a8ec3a944963c520808ba3c9bc55 not found: ID does not exist" Feb 03 00:38:00 crc kubenswrapper[4798]: I0203 00:38:00.917896 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" path="/var/lib/kubelet/pods/f9a1eae8-1f6c-4217-b4fe-128d9271c20b/volumes" Feb 03 00:38:43 crc kubenswrapper[4798]: I0203 00:38:43.867278 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:38:43 crc kubenswrapper[4798]: I0203 00:38:43.868071 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:39:13 crc kubenswrapper[4798]: I0203 00:39:13.866880 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:39:13 crc kubenswrapper[4798]: I0203 00:39:13.867428 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 00:39:43 crc kubenswrapper[4798]: I0203 00:39:43.867309 4798 patch_prober.go:28] interesting pod/machine-config-daemon-b842j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 00:39:43 crc kubenswrapper[4798]: I0203 00:39:43.868273 4798 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 
00:39:43 crc kubenswrapper[4798]: I0203 00:39:43.868339 4798 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b842j" Feb 03 00:39:43 crc kubenswrapper[4798]: I0203 00:39:43.869205 4798 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fba349678c80452f53cc5ad9fdcb4b9593fa0f9c61d8a099cc4cac155e9fc3e"} pod="openshift-machine-config-operator/machine-config-daemon-b842j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 00:39:43 crc kubenswrapper[4798]: I0203 00:39:43.869303 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b842j" podUID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerName="machine-config-daemon" containerID="cri-o://9fba349678c80452f53cc5ad9fdcb4b9593fa0f9c61d8a099cc4cac155e9fc3e" gracePeriod=600 Feb 03 00:39:44 crc kubenswrapper[4798]: I0203 00:39:44.116629 4798 generic.go:334] "Generic (PLEG): container finished" podID="c6602c86-f236-4772-b70f-a8b4847b95dd" containerID="9fba349678c80452f53cc5ad9fdcb4b9593fa0f9c61d8a099cc4cac155e9fc3e" exitCode=0 Feb 03 00:39:44 crc kubenswrapper[4798]: I0203 00:39:44.116700 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerDied","Data":"9fba349678c80452f53cc5ad9fdcb4b9593fa0f9c61d8a099cc4cac155e9fc3e"} Feb 03 00:39:44 crc kubenswrapper[4798]: I0203 00:39:44.117004 4798 scope.go:117] "RemoveContainer" containerID="1654be6f1eed76bb7fed106076af4b53b33d7dbb9520c162c350851dcffaa681" Feb 03 00:39:45 crc kubenswrapper[4798]: I0203 00:39:45.126706 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b842j" 
event={"ID":"c6602c86-f236-4772-b70f-a8b4847b95dd","Type":"ContainerStarted","Data":"e6a68c7684b7e65f930c71ed9d93256d5cf528ec30675b80f4d2f3675b7af99b"} Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.472843 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:39:52 crc kubenswrapper[4798]: E0203 00:39:52.474045 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="gather" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.474073 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="gather" Feb 03 00:39:52 crc kubenswrapper[4798]: E0203 00:39:52.474121 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="copy" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.474133 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="copy" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.474371 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="gather" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.474392 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a1eae8-1f6c-4217-b4fe-128d9271c20b" containerName="copy" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.475150 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.484705 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.599607 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4bg\" (UniqueName: \"kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg\") pod \"infrawatch-operators-7n2jm\" (UID: \"84948ae5-1010-43d2-8d49-a8526076252d\") " pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.703706 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4bg\" (UniqueName: \"kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg\") pod \"infrawatch-operators-7n2jm\" (UID: \"84948ae5-1010-43d2-8d49-a8526076252d\") " pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.744646 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4bg\" (UniqueName: \"kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg\") pod \"infrawatch-operators-7n2jm\" (UID: \"84948ae5-1010-43d2-8d49-a8526076252d\") " pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:39:52 crc kubenswrapper[4798]: I0203 00:39:52.807944 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:39:53 crc kubenswrapper[4798]: I0203 00:39:53.086005 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:39:53 crc kubenswrapper[4798]: W0203 00:39:53.095921 4798 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84948ae5_1010_43d2_8d49_a8526076252d.slice/crio-68e2caf115a25611fd3599ac027650268c5ed541700f2f76a4925600441b824a WatchSource:0}: Error finding container 68e2caf115a25611fd3599ac027650268c5ed541700f2f76a4925600441b824a: Status 404 returned error can't find the container with id 68e2caf115a25611fd3599ac027650268c5ed541700f2f76a4925600441b824a Feb 03 00:39:53 crc kubenswrapper[4798]: I0203 00:39:53.098730 4798 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 00:39:53 crc kubenswrapper[4798]: I0203 00:39:53.183180 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7n2jm" event={"ID":"84948ae5-1010-43d2-8d49-a8526076252d","Type":"ContainerStarted","Data":"68e2caf115a25611fd3599ac027650268c5ed541700f2f76a4925600441b824a"} Feb 03 00:39:54 crc kubenswrapper[4798]: I0203 00:39:54.190737 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7n2jm" event={"ID":"84948ae5-1010-43d2-8d49-a8526076252d","Type":"ContainerStarted","Data":"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90"} Feb 03 00:40:02 crc kubenswrapper[4798]: I0203 00:40:02.808438 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:02 crc kubenswrapper[4798]: I0203 00:40:02.810569 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:02 crc 
kubenswrapper[4798]: I0203 00:40:02.843506 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:02 crc kubenswrapper[4798]: I0203 00:40:02.860230 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-7n2jm" podStartSLOduration=10.749938103 podStartE2EDuration="10.860213196s" podCreationTimestamp="2026-02-03 00:39:52 +0000 UTC" firstStartedPulling="2026-02-03 00:39:53.09829452 +0000 UTC m=+1484.864284531" lastFinishedPulling="2026-02-03 00:39:53.208569613 +0000 UTC m=+1484.974559624" observedRunningTime="2026-02-03 00:39:54.218358355 +0000 UTC m=+1485.984348406" watchObservedRunningTime="2026-02-03 00:40:02.860213196 +0000 UTC m=+1494.626203207" Feb 03 00:40:03 crc kubenswrapper[4798]: I0203 00:40:03.305135 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:03 crc kubenswrapper[4798]: I0203 00:40:03.356549 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:40:05 crc kubenswrapper[4798]: I0203 00:40:05.283048 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-7n2jm" podUID="84948ae5-1010-43d2-8d49-a8526076252d" containerName="registry-server" containerID="cri-o://0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90" gracePeriod=2 Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.243078 4798 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.291122 4798 generic.go:334] "Generic (PLEG): container finished" podID="84948ae5-1010-43d2-8d49-a8526076252d" containerID="0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90" exitCode=0 Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.291170 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7n2jm" event={"ID":"84948ae5-1010-43d2-8d49-a8526076252d","Type":"ContainerDied","Data":"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90"} Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.291177 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-7n2jm" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.291198 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7n2jm" event={"ID":"84948ae5-1010-43d2-8d49-a8526076252d","Type":"ContainerDied","Data":"68e2caf115a25611fd3599ac027650268c5ed541700f2f76a4925600441b824a"} Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.291216 4798 scope.go:117] "RemoveContainer" containerID="0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.313479 4798 scope.go:117] "RemoveContainer" containerID="0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90" Feb 03 00:40:06 crc kubenswrapper[4798]: E0203 00:40:06.314025 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90\": container with ID starting with 0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90 not found: ID does not exist" containerID="0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90" Feb 03 
00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.314105 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90"} err="failed to get container status \"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90\": rpc error: code = NotFound desc = could not find container \"0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90\": container with ID starting with 0d2fb2d2e7170b869343d27211ca7b23c3ec670302a02b817afbcec3f94a5e90 not found: ID does not exist" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.366960 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4bg\" (UniqueName: \"kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg\") pod \"84948ae5-1010-43d2-8d49-a8526076252d\" (UID: \"84948ae5-1010-43d2-8d49-a8526076252d\") " Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.374511 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg" (OuterVolumeSpecName: "kube-api-access-lx4bg") pod "84948ae5-1010-43d2-8d49-a8526076252d" (UID: "84948ae5-1010-43d2-8d49-a8526076252d"). InnerVolumeSpecName "kube-api-access-lx4bg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.468628 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx4bg\" (UniqueName: \"kubernetes.io/projected/84948ae5-1010-43d2-8d49-a8526076252d-kube-api-access-lx4bg\") on node \"crc\" DevicePath \"\"" Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.652182 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.664189 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-7n2jm"] Feb 03 00:40:06 crc kubenswrapper[4798]: I0203 00:40:06.922866 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84948ae5-1010-43d2-8d49-a8526076252d" path="/var/lib/kubelet/pods/84948ae5-1010-43d2-8d49-a8526076252d/volumes" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.288122 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7b97"] Feb 03 00:40:35 crc kubenswrapper[4798]: E0203 00:40:35.289822 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84948ae5-1010-43d2-8d49-a8526076252d" containerName="registry-server" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.289903 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="84948ae5-1010-43d2-8d49-a8526076252d" containerName="registry-server" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.290081 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="84948ae5-1010-43d2-8d49-a8526076252d" containerName="registry-server" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.291005 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.316692 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7b97"] Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.414406 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.414488 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhq46\" (UniqueName: \"kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.414524 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.515920 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhq46\" (UniqueName: \"kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.515980 4798 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.516141 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.516761 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.517360 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.540193 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhq46\" (UniqueName: \"kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46\") pod \"certified-operators-l7b97\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") " pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.613076 4798 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:35 crc kubenswrapper[4798]: I0203 00:40:35.914696 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7b97"] Feb 03 00:40:36 crc kubenswrapper[4798]: I0203 00:40:36.540557 4798 generic.go:334] "Generic (PLEG): container finished" podID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerID="53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da" exitCode=0 Feb 03 00:40:36 crc kubenswrapper[4798]: I0203 00:40:36.540615 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerDied","Data":"53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da"} Feb 03 00:40:36 crc kubenswrapper[4798]: I0203 00:40:36.540908 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerStarted","Data":"353f65e56fa1b4bf9086b59b7f36cb0395105fd5a1f4d14cf13dd42b76440141"} Feb 03 00:40:37 crc kubenswrapper[4798]: I0203 00:40:37.552069 4798 generic.go:334] "Generic (PLEG): container finished" podID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerID="e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a" exitCode=0 Feb 03 00:40:37 crc kubenswrapper[4798]: I0203 00:40:37.552335 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerDied","Data":"e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a"} Feb 03 00:40:38 crc kubenswrapper[4798]: I0203 00:40:38.563628 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" 
event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerStarted","Data":"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad"} Feb 03 00:40:38 crc kubenswrapper[4798]: I0203 00:40:38.592467 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l7b97" podStartSLOduration=2.146926665 podStartE2EDuration="3.592445256s" podCreationTimestamp="2026-02-03 00:40:35 +0000 UTC" firstStartedPulling="2026-02-03 00:40:36.541820173 +0000 UTC m=+1528.307810184" lastFinishedPulling="2026-02-03 00:40:37.987338744 +0000 UTC m=+1529.753328775" observedRunningTime="2026-02-03 00:40:38.586423399 +0000 UTC m=+1530.352413410" watchObservedRunningTime="2026-02-03 00:40:38.592445256 +0000 UTC m=+1530.358435257" Feb 03 00:40:45 crc kubenswrapper[4798]: I0203 00:40:45.614731 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:45 crc kubenswrapper[4798]: I0203 00:40:45.615537 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:45 crc kubenswrapper[4798]: I0203 00:40:45.659825 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:45 crc kubenswrapper[4798]: I0203 00:40:45.717547 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:45 crc kubenswrapper[4798]: I0203 00:40:45.909605 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7b97"] Feb 03 00:40:47 crc kubenswrapper[4798]: I0203 00:40:47.641066 4798 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7b97" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="registry-server" 
containerID="cri-o://a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad" gracePeriod=2 Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.539589 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.648272 4798 generic.go:334] "Generic (PLEG): container finished" podID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerID="a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad" exitCode=0 Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.648329 4798 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7b97" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.648338 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerDied","Data":"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad"} Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.649548 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7b97" event={"ID":"e8ea2547-cfa8-4513-a456-f4528a44ee7b","Type":"ContainerDied","Data":"353f65e56fa1b4bf9086b59b7f36cb0395105fd5a1f4d14cf13dd42b76440141"} Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.649566 4798 scope.go:117] "RemoveContainer" containerID="a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.664071 4798 scope.go:117] "RemoveContainer" containerID="e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.677060 4798 scope.go:117] "RemoveContainer" containerID="53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.706881 
4798 scope.go:117] "RemoveContainer" containerID="a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad" Feb 03 00:40:48 crc kubenswrapper[4798]: E0203 00:40:48.708607 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad\": container with ID starting with a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad not found: ID does not exist" containerID="a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.708759 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad"} err="failed to get container status \"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad\": rpc error: code = NotFound desc = could not find container \"a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad\": container with ID starting with a9e767a334060bf6417db88ab44035c68b7870bb8b54f9c9999cf6287d6e65ad not found: ID does not exist" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.708855 4798 scope.go:117] "RemoveContainer" containerID="e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a" Feb 03 00:40:48 crc kubenswrapper[4798]: E0203 00:40:48.709323 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a\": container with ID starting with e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a not found: ID does not exist" containerID="e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a" Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.709361 4798 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a"} err="failed to get container status \"e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a\": rpc error: code = NotFound desc = could not find container \"e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a\": container with ID starting with e703e8d569b9ca5c6635e60ebac9a9cb7fb76e4b4b671a7180349b68b6aa569a not found: ID does not exist"
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.709383 4798 scope.go:117] "RemoveContainer" containerID="53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da"
Feb 03 00:40:48 crc kubenswrapper[4798]: E0203 00:40:48.709694 4798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da\": container with ID starting with 53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da not found: ID does not exist" containerID="53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da"
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.709804 4798 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da"} err="failed to get container status \"53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da\": rpc error: code = NotFound desc = could not find container \"53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da\": container with ID starting with 53841d518df77c1e4fb13570c4aa359bcef4a5d8e8e5e8ba1608c537ece798da not found: ID does not exist"
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.739933 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities\") pod \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") "
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.739994 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhq46\" (UniqueName: \"kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46\") pod \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") "
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.740019 4798 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content\") pod \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\" (UID: \"e8ea2547-cfa8-4513-a456-f4528a44ee7b\") "
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.740952 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities" (OuterVolumeSpecName: "utilities") pod "e8ea2547-cfa8-4513-a456-f4528a44ee7b" (UID: "e8ea2547-cfa8-4513-a456-f4528a44ee7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.746080 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46" (OuterVolumeSpecName: "kube-api-access-qhq46") pod "e8ea2547-cfa8-4513-a456-f4528a44ee7b" (UID: "e8ea2547-cfa8-4513-a456-f4528a44ee7b"). InnerVolumeSpecName "kube-api-access-qhq46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.793572 4798 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ea2547-cfa8-4513-a456-f4528a44ee7b" (UID: "e8ea2547-cfa8-4513-a456-f4528a44ee7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.841701 4798 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.841759 4798 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhq46\" (UniqueName: \"kubernetes.io/projected/e8ea2547-cfa8-4513-a456-f4528a44ee7b-kube-api-access-qhq46\") on node \"crc\" DevicePath \"\""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.841774 4798 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea2547-cfa8-4513-a456-f4528a44ee7b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.978933 4798 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7b97"]
Feb 03 00:40:48 crc kubenswrapper[4798]: I0203 00:40:48.988096 4798 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7b97"]
Feb 03 00:40:50 crc kubenswrapper[4798]: I0203 00:40:50.916499 4798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" path="/var/lib/kubelet/pods/e8ea2547-cfa8-4513-a456-f4528a44ee7b/volumes"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.193445 4798 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pgxdd"]
Feb 03 00:41:03 crc kubenswrapper[4798]: E0203 00:41:03.194519 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="registry-server"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.194568 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="registry-server"
Feb 03 00:41:03 crc kubenswrapper[4798]: E0203 00:41:03.194593 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="extract-utilities"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.194602 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="extract-utilities"
Feb 03 00:41:03 crc kubenswrapper[4798]: E0203 00:41:03.194629 4798 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="extract-content"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.194637 4798 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="extract-content"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.194808 4798 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ea2547-cfa8-4513-a456-f4528a44ee7b" containerName="registry-server"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.196238 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.214641 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgxdd"]
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.270044 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-utilities\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.270105 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbvm\" (UniqueName: \"kubernetes.io/projected/636688fd-86c0-4229-90b7-69af7bf7ee6c-kube-api-access-6rbvm\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.270152 4798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-catalog-content\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.371151 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-utilities\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.371204 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbvm\" (UniqueName: \"kubernetes.io/projected/636688fd-86c0-4229-90b7-69af7bf7ee6c-kube-api-access-6rbvm\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.371235 4798 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-catalog-content\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.371596 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-utilities\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.371614 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636688fd-86c0-4229-90b7-69af7bf7ee6c-catalog-content\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.390845 4798 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbvm\" (UniqueName: \"kubernetes.io/projected/636688fd-86c0-4229-90b7-69af7bf7ee6c-kube-api-access-6rbvm\") pod \"redhat-operators-pgxdd\" (UID: \"636688fd-86c0-4229-90b7-69af7bf7ee6c\") " pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.525358 4798 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:03 crc kubenswrapper[4798]: I0203 00:41:03.959329 4798 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgxdd"]
Feb 03 00:41:04 crc kubenswrapper[4798]: I0203 00:41:04.785067 4798 generic.go:334] "Generic (PLEG): container finished" podID="636688fd-86c0-4229-90b7-69af7bf7ee6c" containerID="42c284d601335cdf6fe2933582eceae442268e08591dc9add84765177db7dd9b" exitCode=0
Feb 03 00:41:04 crc kubenswrapper[4798]: I0203 00:41:04.785174 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgxdd" event={"ID":"636688fd-86c0-4229-90b7-69af7bf7ee6c","Type":"ContainerDied","Data":"42c284d601335cdf6fe2933582eceae442268e08591dc9add84765177db7dd9b"}
Feb 03 00:41:04 crc kubenswrapper[4798]: I0203 00:41:04.785410 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgxdd" event={"ID":"636688fd-86c0-4229-90b7-69af7bf7ee6c","Type":"ContainerStarted","Data":"a3e84e055aa232d62c851968304ba300d0a8f8854723eae72c6179ce12026c4d"}
Feb 03 00:41:05 crc kubenswrapper[4798]: I0203 00:41:05.795143 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgxdd" event={"ID":"636688fd-86c0-4229-90b7-69af7bf7ee6c","Type":"ContainerStarted","Data":"75c086b9a1d79c4c7906e68c1664950f6e1f8dbe665d993c897449b37c52b7e8"}
Feb 03 00:41:06 crc kubenswrapper[4798]: I0203 00:41:06.810183 4798 generic.go:334] "Generic (PLEG): container finished" podID="636688fd-86c0-4229-90b7-69af7bf7ee6c" containerID="75c086b9a1d79c4c7906e68c1664950f6e1f8dbe665d993c897449b37c52b7e8" exitCode=0
Feb 03 00:41:06 crc kubenswrapper[4798]: I0203 00:41:06.810248 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgxdd" event={"ID":"636688fd-86c0-4229-90b7-69af7bf7ee6c","Type":"ContainerDied","Data":"75c086b9a1d79c4c7906e68c1664950f6e1f8dbe665d993c897449b37c52b7e8"}
Feb 03 00:41:07 crc kubenswrapper[4798]: I0203 00:41:07.818586 4798 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgxdd" event={"ID":"636688fd-86c0-4229-90b7-69af7bf7ee6c","Type":"ContainerStarted","Data":"3e8f0f7a52267cbb55b924ff47750b9390321cd67060faf7b005e137b45b8743"}
Feb 03 00:41:07 crc kubenswrapper[4798]: I0203 00:41:07.844023 4798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pgxdd" podStartSLOduration=2.425664147 podStartE2EDuration="4.843991148s" podCreationTimestamp="2026-02-03 00:41:03 +0000 UTC" firstStartedPulling="2026-02-03 00:41:04.786566129 +0000 UTC m=+1556.552556140" lastFinishedPulling="2026-02-03 00:41:07.20489313 +0000 UTC m=+1558.970883141" observedRunningTime="2026-02-03 00:41:07.840242286 +0000 UTC m=+1559.606232327" watchObservedRunningTime="2026-02-03 00:41:07.843991148 +0000 UTC m=+1559.609981209"
Feb 03 00:41:13 crc kubenswrapper[4798]: I0203 00:41:13.526628 4798 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pgxdd"
Feb 03 00:41:13 crc kubenswrapper[4798]: I0203 00:41:13.528364 4798 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pgxdd"